
What Is Dynamic Programming? Origin Story, Real-Life Uses, LeetCode Problems & Complete Beginner Guide


Introduction — Why Dynamic Programming Feels Hard (And Why It Isn't)

If you've ever stared at a LeetCode problem, read the solution, understood every single line, and still had absolutely no idea how someone arrived at it — welcome. You've just experienced the classic Dynamic Programming (DP) confusion.

DP has a reputation. People treat it like some dark art reserved for competitive programmers or Google engineers. The truth? Dynamic Programming is one of the most logical, learnable, and satisfying techniques in all of computer science. Once it clicks, it really clicks.

This guide will take you from zero to genuinely confident. We'll cover where DP came from, how it works, what patterns to learn, how to recognize DP problems, real-world places it shows up, LeetCode problems to practice, time complexity analysis, and the mistakes that trip up even experienced developers.

Let's go.

The Origin Story — Who Invented Dynamic Programming and Why?

The term "Dynamic Programming" was coined by Richard Bellman in the early 1950s while he was working at RAND Corporation. Here's the funny part: the name was deliberately chosen to sound impressive and vague.

Bellman was doing mathematical research that Charles Wilson — the US Secretary of Defense, whose department funded RAND — would have found difficult to support if described accurately. Wilson had a well-known distaste for the word "research." So Bellman invented a name that sounded suitably grand and mathematical: Dynamic Programming.

In his autobiography, Bellman wrote that he picked the word "dynamic" because it had a precise technical meaning and was also impossible to use in a negative sense. "Programming" referred to the mathematical sense — planning and decision-making — not computer programming.

The underlying idea?
Break a complex problem into overlapping subproblems, solve each subproblem once, and store the result so you never solve it twice.

Bellman's foundational contribution was the Bellman Equation, which underpins not just algorithms but also economics, operations research, and modern reinforcement learning.

So the next time DP feels frustrating, remember — even its inventor named it specifically to confuse people. You're in good company.

What Is Dynamic Programming? (Simple Definition)

Dynamic Programming is an algorithmic technique used to solve problems by:

- Breaking them down into smaller overlapping subproblems
- Solving each subproblem only once
- Storing the result (memoization or tabulation)
- Building up the final solution from those stored results

The key insight is overlapping subproblems + optimal substructure.

Overlapping subproblems means the same smaller problems come up again and again. Instead of solving them every time (like plain recursion does), DP solves them once and caches the answer.

Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems.

If a problem has both these properties — it's a DP problem.

The Two Approaches to Dynamic Programming

1. Top-Down with Memoization (Recursive + Cache)

You write a recursive solution exactly as you would naturally, but add a cache (usually a dictionary or array) to store results you've already computed.

```
fib(n):
    if n in cache: return cache[n]
    if n <= 1: return n
    cache[n] = fib(n-1) + fib(n-2)
    return cache[n]
```

This is called memoization — remember what you computed so you don't repeat yourself.

Pros: Natural to write, mirrors the recursive thinking, easy to reason about.
Cons: Stack overhead from recursion, risk of stack overflow on large inputs.
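To see just how much recomputation memoization saves, here is a small runnable Java sketch. The class name `FibCallCounter` and the call-counting are purely for this illustration; the point is that plain recursion on fib(30) makes millions of calls, while the cached version makes a few dozen.

```java
public class FibCallCounter {
    static long naiveCalls = 0;

    // Plain recursion: the same subproblems are recomputed again and again
    static long fibNaive(int n) {
        naiveCalls++;
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    static long[] cache;

    // Memoized version: each subproblem is solved exactly once
    static long fibMemo(int n) {
        if (n <= 1) return n;
        if (cache[n] != 0) return cache[n];
        cache[n] = fibMemo(n - 1) + fibMemo(n - 2);
        return cache[n];
    }

    public static void main(String[] args) {
        cache = new long[31];
        System.out.println("fib(30) memoized = " + fibMemo(30)); // 832040
        System.out.println("fib(30) naive    = " + fibNaive(30));
        System.out.println("naive call count = " + naiveCalls);  // over 2.6 million
    }
}
```

Using `cache[n] != 0` as the "already computed" check works here because every cached Fibonacci value of interest is nonzero; fib(0) is handled by the base case before the cache is consulted.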
2. Bottom-Up with Tabulation (Iterative)

You figure out the order in which subproblems need to be solved, then solve them iteratively from the smallest up, filling a table.

```
fib(n):
    dp = [0, 1]
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]
```

This is called tabulation — fill a table, cell by cell, bottom to top.

Pros: No recursion overhead, usually faster in practice, easier to optimize space.
Cons: Requires thinking about the order of computation upfront.

🧩 Dynamic Programming Template Code

Before diving into how to recognize DP problems, here are ready-to-use Java templates for every major DP pattern. Think of these as your reusable blueprints — every DP problem you ever solve will fit into one of these structures. Just define your state, plug in your recurrence relation, and you are good to go.

Template 1 — Top-Down (Memoization)

```java
import java.util.HashMap;
import java.util.Map;

public class TopDownDP {
    Map<Integer, Integer> memo = new HashMap<>();

    public int solve(int n) {
        // Base case
        if (n <= 1) return n;
        // Check cache
        if (memo.containsKey(n)) return memo.get(n);
        // Recurrence relation — change this part for your problem
        int result = solve(n - 1) + solve(n - 2);
        // Store in cache
        memo.put(n, result);
        return result;
    }
}
```

Template 2 — Bottom-Up (Tabulation)

```java
public class BottomUpDP {
    public int solve(int n) {
        // Create DP table
        int[] dp = new int[n + 1];
        // Base cases
        dp[0] = 0;
        dp[1] = 1;
        // Fill the table bottom-up
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}
```

Template 3 — Bottom-Up with Space Optimization

```java
public class SpaceOptimizedDP {
    public int solve(int n) {
        // Only keep last two values instead of full table
        int prev2 = 0;
        int prev1 = 1;
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}
```

Template 4 — 2D DP (Two Sequences or Grid)

```java
public class TwoDimensionalDP {
    public int solve(String s1, String s2) {
        int m = s1.length();
        int n = s2.length();
        // Create 2D DP table
        int[][] dp = new int[m + 1][n + 1];
        // Base cases — first row and column
        for (int i = 0; i <= m; i++) dp[i][0] = i;
        for (int j = 0; j <= n; j++) dp[0][j] = j;
        // Fill table cell by cell
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                // Recurrence relation — change this part for your problem
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j],
                            Math.min(dp[i][j - 1], dp[i - 1][j - 1]));
                }
            }
        }
        return dp[m][n];
    }
}
```

Template 5 — Knapsack Pattern

```java
public class KnapsackDP {
    public int solve(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        // dp[i][w] = max value using first i items with capacity w
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= capacity; w++) {
                // Don't take item i
                dp[i][w] = dp[i - 1][w];
                // Take item i if it fits
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i][w],
                            values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                }
            }
        }
        return dp[n][capacity];
    }
}
```

💡 How to use these templates:

Step 1 — Identify which pattern your problem fits into.
Step 2 — Define what dp[i] or dp[i][j] means in plain English before writing any code.
Step 3 — Write your recurrence relation on paper first.
Step 4 — Plug it into the matching template above.
Step 5 — Handle your specific base cases carefully.

🎥 Visual Learning Resource — Watch This Before Moving Forward

If you prefer learning by watching before reading, this free full-length course by freeCodeCamp is one of the best Dynamic Programming resources on the internet. Watch it alongside this guide for maximum understanding.

Credit: freeCodeCamp — a free, nonprofit coding education platform.

How to Recognize a Dynamic Programming Problem

Ask yourself these four questions:

1. Can I define the problem in terms of smaller versions of itself? If you can write a recursive formula (recurrence relation), DP might apply.

2. Do the subproblems overlap?
If a naive recursive solution would recompute the same thing many times, DP is the right tool.

3. Is there an optimal substructure? Is the best answer to the big problem made up of best answers to smaller problems?

4. Are you looking for a count, minimum, maximum, or yes/no answer? DP problems often ask: "What is the minimum cost?", "How many ways?", "Can we achieve X?"

Red flag words in problem statements: minimum, maximum, shortest, longest, count the number of ways, can we reach, is it possible, fewest steps.

The Core DP Patterns You Must Learn

Mastering DP is really about recognizing patterns. Here are the most important ones:

Pattern 1 — 1D DP (Linear). Problems where the state depends on previous elements in a single sequence. Examples: Fibonacci, Climbing Stairs, House Robber.

Pattern 2 — 2D DP (Grid / Two-sequence). Problems with two dimensions of state, often grids or two strings. Examples: Longest Common Subsequence, Edit Distance, Unique Paths.

Pattern 3 — Interval DP. You consider all possible intervals or subarrays and build solutions from them. Examples: Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning.

Pattern 4 — Knapsack DP (0/1 and Unbounded). You decide whether to include or exclude items under a capacity constraint. Examples: 0/1 Knapsack, Coin Change, Partition Equal Subset Sum.

Pattern 5 — DP on Trees. State is defined per node; you combine results from children. Examples: Diameter of Binary Tree, House Robber III, Maximum Path Sum.

Pattern 6 — DP on Subsets / Bitmask DP. State includes a bitmask representing which elements have been chosen. Examples: Travelling Salesman Problem, Shortest Superstring.

Pattern 7 — DP on Strings. Matching, editing, or counting arrangements within strings. Examples: Longest Palindromic Subsequence, Regular Expression Matching, Wildcard Matching.

Top LeetCode Problems to Practice Dynamic Programming (With Links)

Here are the essential problems, organized by difficulty and pattern.
Solve them in this order.

Beginner — Warm Up
- Climbing Stairs (1D DP): https://leetcode.com/problems/climbing-stairs/
- Fibonacci Number (1D DP): https://leetcode.com/problems/fibonacci-number/
- House Robber (1D DP): https://leetcode.com/problems/house-robber/
- Min Cost Climbing Stairs (1D DP): https://leetcode.com/problems/min-cost-climbing-stairs/
- Best Time to Buy and Sell Stock (1D DP): https://leetcode.com/problems/best-time-to-buy-and-sell-stock/

Intermediate — Core Patterns
- Coin Change (Knapsack): https://leetcode.com/problems/coin-change/
- Longest Increasing Subsequence (1D DP): https://leetcode.com/problems/longest-increasing-subsequence/
- Longest Common Subsequence (2D DP): https://leetcode.com/problems/longest-common-subsequence/
- 0/1 Knapsack via Subset Sum (Knapsack): https://leetcode.com/problems/partition-equal-subset-sum/
- Unique Paths (2D Grid DP): https://leetcode.com/problems/unique-paths/
- Jump Game (1D DP / Greedy): https://leetcode.com/problems/jump-game/
- Word Break (String DP): https://leetcode.com/problems/word-break/
- Decode Ways (1D DP): https://leetcode.com/problems/decode-ways/
- Edit Distance (2D String DP): https://leetcode.com/problems/edit-distance/
- Triangle (2D DP): https://leetcode.com/problems/triangle/

Advanced — Interview Level
- Burst Balloons (Interval DP): https://leetcode.com/problems/burst-balloons/
- Regular Expression Matching (String DP): https://leetcode.com/problems/regular-expression-matching/
- Wildcard Matching (String DP): https://leetcode.com/problems/wildcard-matching/
- Palindrome Partitioning II (Interval DP): https://leetcode.com/problems/palindrome-partitioning-ii/
- Maximum Profit in Job Scheduling (DP + Binary Search): https://leetcode.com/problems/maximum-profit-in-job-scheduling/
- Distinct Subsequences (2D DP): https://leetcode.com/problems/distinct-subsequences/
- Cherry Pickup (3D DP): https://leetcode.com/problems/cherry-pickup/

Real-World Use Cases of Dynamic Programming

DP is not just for coding interviews. It is deeply embedded in the technology you use every day.

1. Google Maps & Navigation (Shortest Path). The routing engines behind GPS apps use DP-based algorithms like Dijkstra and Bellman-Ford to find the shortest or fastest path between two points across millions of nodes.

2. Spell Checkers & Autocorrect (Edit Distance). When your phone corrects "teh" to "the," it is computing Edit Distance — a classic DP problem — between what you typed and every word in the dictionary.

3. DNA Sequence Alignment (Bioinformatics). Researchers use the Needleman-Wunsch and Smith-Waterman algorithms — both DP — to align DNA and protein sequences and find similarities between species or identify mutations.

4. Video Compression (MPEG, H.264). Modern video codecs use DP to determine the most efficient way to encode video frames, deciding which frames to store as full images and which to store as differences from the previous frame.

5. Financial Portfolio Optimization. Investment algorithms use DP to find the optimal allocation of assets under risk constraints — essentially a variant of the knapsack problem.

6. Natural Language Processing (NLP). The Viterbi algorithm — used in speech recognition, part-of-speech tagging, and machine translation — is a DP algorithm. Every time Siri or Google Assistant understands your sentence, DP played a role.

7. Game AI (Chess, Checkers). Game trees and minimax algorithms with memoization use DP to evaluate board positions and find the best move without recomputing already-seen positions.

8. Compiler Optimization. Compilers use DP to decide the optimal order of operations and instruction scheduling to generate the most efficient machine code.

9. Text Justification (Word Processors). Microsoft Word and LaTeX use DP to optimally break paragraphs into lines — minimizing raggedness and maximizing visual appeal.

10. Resource Scheduling in Cloud Computing. AWS, Google Cloud, and Azure use DP-based scheduling to assign computational tasks to servers in the most cost-efficient way possible.

Time Complexity Analysis of Common DP Problems

Understanding the time complexity of DP is critical for interviews and for building scalable systems.

- Fibonacci (naive recursion): O(2ⁿ) time, O(n) space — exponential, terrible
- Fibonacci (DP): O(n) time, O(1) space with optimization — linear, excellent
- Longest Common Subsequence: O(m × n) time and space (m, n = lengths of the two strings)
- Edit Distance: O(m × n) time and space; space can be optimized to O(n)
- 0/1 Knapsack: O(n × W) time and space (n = items, W = capacity)
- Coin Change: O(n × amount) time, O(amount) space — classic tabulation
- Longest Increasing Subsequence: O(n²), or O(n log n) with binary search; O(n) space
- Matrix Chain Multiplication: O(n³) time, O(n²) space — interval DP
- Travelling Salesman (bitmask): O(2ⁿ × n²) time, O(2ⁿ × n) space — still exponential, but manageable for small n

The general rule: DP trades time for space. You use memory to avoid recomputation. The time complexity equals the number of unique states multiplied by the work done per state.

How to Learn and Master Dynamic Programming — Step by Step

Here is an honest, structured path to mastery:

Step 1 — Get recursion absolutely solid first. DP is memoized recursion at its core. If you cannot write clean recursive solutions confidently, DP will remain confusing. Practice at least 20 pure recursion problems first.

Step 2 — Start with the classics. Fibonacci → Climbing Stairs → House Robber → Coin Change. These teach you the core pattern of defining state and transition without overwhelming you.

Step 3 — Learn to define state explicitly. Before writing any code, ask: "What does dp[i] represent?" Write it in plain English. "dp[i] = the minimum cost to reach step i." This single habit separates good DP thinkers from struggling ones.

Step 4 — Write the recurrence relation before coding. On paper or in a comment. Example: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
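That recurrence drops straight into the bottom-up template. As a sketch, here it is as a full solution to Min Cost Climbing Stairs (LeetCode 746), where dp[i] is the minimum cost to reach step i and the "top" sits one past the last step:

```java
public class MinCostClimbingStairs {
    // dp[i] = the minimum cost to reach step i
    public static int minCostClimbingStairs(int[] cost) {
        int n = cost.length;
        int[] dp = new int[n + 1];
        // Base cases: you may start from step 0 or step 1 for free
        dp[0] = 0;
        dp[1] = 0;
        for (int i = 2; i <= n; i++) {
            // Arrive from one step below (paying cost[i-1]) or two below (paying cost[i-2])
            dp[i] = Math.min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2]);
        }
        return dp[n]; // the top is one past the last step
    }

    public static void main(String[] args) {
        System.out.println(minCostClimbingStairs(new int[]{10, 15, 20})); // 15
        System.out.println(minCostClimbingStairs(
                new int[]{1, 100, 1, 1, 1, 100, 1, 1, 100, 1}));          // 6
    }
}
```

Notice that the code is almost nothing once the state definition and recurrence are written down; that is the point of Steps 3 and 4.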
If you can write the recurrence, the code writes itself.

Step 5 — Master one pattern at a time. Don't jump between knapsack and interval DP in the same week. Spend a few days on each pattern until it feels intuitive.

Step 6 — Solve the same problem both ways. Top-down and bottom-up. This builds deep understanding of what DP is actually doing.

Step 7 — Optimize space after getting correctness. Many 2D DP solutions can use a single row instead of a full matrix. Learn this optimization after you understand the full solution.

Step 8 — Do timed practice under interview conditions. Give yourself 35 minutes per problem. Review what you got wrong. DP is a muscle — it builds with reps.

Common Mistakes in Dynamic Programming (And How to Avoid Them)

Mistake 1 — Jumping to code before defining state. The most common DP error. Always define what dp[i] or dp[i][j] means before writing a single line of code.

Mistake 2 — Wrong base cases. A single wrong base case corrupts every answer built on top of it. Trace through your base cases manually on a tiny example before running code.

Mistake 3 — Off-by-one errors in indexing. Whether your dp array is 0-indexed or 1-indexed must be 100% consistent throughout. This causes more bugs in DP than almost anything else.

Mistake 4 — Confusing top-down with bottom-up state order. In bottom-up DP, you must ensure that when you compute dp[i], all values it depends on are already filled. If you compute in the wrong order, you get garbage answers.

Mistake 5 — Memoizing in the wrong dimension. In 2D problems, some people cache only one dimension when the state actually requires two. Always identify all variables that affect the outcome.

Mistake 6 — Using global mutable state in recursion. If you use a shared array and don't clear it between test cases, you'll get wrong answers on subsequent inputs. Always scope your cache correctly.

Mistake 7 — Not considering the full state space.
In problems like Knapsack, forgetting that the state is (item index, remaining capacity) — not just item index — leads to fundamentally wrong solutions.

Mistake 8 — Giving up after not recognizing the pattern immediately. DP problems don't announce themselves. The skill is learning to ask "are there overlapping subproblems here?" on every problem. This takes time. Don't mistake unfamiliarity for inability.

Frequently Asked Questions About Dynamic Programming

Q: Is Dynamic Programming the same as recursion?
Not exactly. Recursion is a technique for breaking problems into smaller pieces. DP is recursion plus memoization — or iterative tabulation. All DP can be written recursively, but not all recursion is DP.

Q: What is the difference between DP and Divide and Conquer?
Divide and Conquer (like Merge Sort) breaks problems into non-overlapping subproblems. DP is used when subproblems overlap — meaning the same subproblem is solved multiple times in a naive approach.

Q: How do I know when NOT to use DP?
If the subproblems don't overlap (no repeated computation), greedy or divide-and-conquer may be better. If the problem has no optimal substructure, DP won't give a correct answer.

Q: Do I need to memorize DP solutions for interviews?
No. You need to recognize patterns and be able to derive the recurrence relation. Memorizing solutions without understanding them will fail you in interviews. Focus on the thinking process.

Q: How long does it take to get good at DP?
Most people start to feel genuinely comfortable after solving 40–60 varied DP problems with deliberate practice. The first 10 feel impossible. The next 20 feel hard. After 50, patterns start feeling obvious.

Q: What programming language is best for DP?
Any language works. Python is often used for learning because its dictionaries make memoization trivial. C++ is preferred in competitive programming for its speed. For interviews, use whatever language you're most comfortable in.

Q: What is space optimization in DP?
Many DP problems only look back one or two rows to compute the current row. In those cases, you can replace an n×m table with just two arrays (or even one), reducing space complexity from O(n×m) to O(m). This is called space optimization, or the rolling array technique.

Q: Can DP be applied to graph problems?
Absolutely. Shortest path algorithms like Bellman-Ford are DP. Longest path in a DAG is DP. DP on trees is a rich subfield. Anywhere you have states and transitions, DP can potentially apply.

Q: Is Greedy a type of Dynamic Programming?
Greedy is related but distinct. Greedy makes locally optimal choices without reconsidering. DP considers all choices and picks the globally optimal one. Some DP solutions reduce to greedy when the structure allows, but they are different techniques.

Q: What resources should I use to learn DP?
For structured learning: Neetcode.io (organized problem list), Striver's DP Series on YouTube, and the book "Introduction to Algorithms" (CLRS) for theoretical depth. For practice: LeetCode's Dynamic Programming study plan and Codeforces for competitive DP.

Final Thoughts — Dynamic Programming Is a Superpower

Dynamic Programming is genuinely one of the most powerful ideas in computer science. It shows up in your GPS, your autocorrect, your streaming video, your bank's risk models, and the AI assistants you talk to daily.

The path to mastering it is not memorization. It is developing the habit of asking: can I break this into smaller problems that overlap? And then learning to define state clearly, write the recurrence, and trust the process.

Start with Climbing Stairs. Write dp[i] in plain English before every problem. Solve everything twice — top-down and bottom-up. Do 50 problems with genuine reflection, not just accepted solutions.

The click moment will come. And when it does, you'll wonder why it ever felt hard.
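Taking the "solve everything twice" advice literally, here is Climbing Stairs (count the distinct ways to reach step n taking 1 or 2 steps at a time) written both top-down and bottom-up in one small sketch; the class and method names are just for this demo:

```java
import java.util.HashMap;
import java.util.Map;

public class ClimbingStairsBothWays {
    static Map<Integer, Long> memo = new HashMap<>();

    // Top-down: ways(n) = ways(n-1) + ways(n-2), with a cache
    static long climbTopDown(int n) {
        if (n <= 2) return n;
        if (memo.containsKey(n)) return memo.get(n);
        long ways = climbTopDown(n - 1) + climbTopDown(n - 2);
        memo.put(n, ways);
        return ways;
    }

    // Bottom-up with rolling variables: same recurrence, smallest step first
    static long climbBottomUp(int n) {
        if (n <= 2) return n;
        long prev2 = 1, prev1 = 2;
        for (int i = 3; i <= n; i++) {
            long curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }

    public static void main(String[] args) {
        System.out.println(climbTopDown(5));   // 8
        System.out.println(climbBottomUp(5));  // 8
    }
}
```

Writing both versions side by side makes it obvious that they compute the identical table, just in opposite directions.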

Tags: Dynamic Programming, Memoization, Tabulation, Java, Origin Story, Richard Bellman
Merge Sort Algorithm Explained | Java Implementation, Intuition & Complexity


Introduction

Sorting is one of the most fundamental operations in computer science, and Merge Sort is among the most efficient and widely used sorting algorithms. It follows the Divide and Conquer approach, making it highly scalable and predictable even for large datasets.

In this article, we will cover:
- Intuition behind Merge Sort
- Step-by-step breakdown
- Multiple approaches
- Java implementation with comments
- Time & space complexity analysis

🔗 Problem Link

GeeksforGeeks: Merge Sort

Problem Statement

Given an array arr[] with starting index l and ending index r, sort the array using the Merge Sort algorithm.

Examples

Example 1
Input: arr = [4, 1, 3, 9, 7]
Output: [1, 3, 4, 7, 9]

Example 2
Input: arr = [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
Output: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

Key Insight

Merge Sort works by: Divide → Conquer → Combine
1. Divide the array into two halves
2. Recursively sort each half
3. Merge both sorted halves

Intuition (Visual Understanding)

For [4, 1, 3, 9, 7]:

Step 1: Divide
[4, 1, 3] [9, 7]
[4, 1] [3] [9] [7]
[4] [1]

Step 2: Merge
[4] [1] → [1, 4]
[1, 4] [3] → [1, 3, 4]
[9] [7] → [7, 9]

Step 3: Final Merge
[1, 3, 4] + [7, 9] → [1, 3, 4, 7, 9]

Approach 1: Recursive Merge Sort (Top-Down)

Idea
- Keep dividing until single elements remain
- Merge sorted subarrays

Java Code

```java
class Solution {
    // Function to merge two sorted halves
    void merge(int[] arr, int l, int mid, int h) {
        // Temporary array to store merged result
        int[] temp = new int[h - l + 1];
        int i = l;       // pointer for left half
        int j = mid + 1; // pointer for right half
        int k = 0;       // pointer for temp array

        // Compare elements from both halves
        while (i <= mid && j <= h) {
            if (arr[i] <= arr[j]) {
                temp[k] = arr[i];
                i++;
            } else {
                temp[k] = arr[j];
                j++;
            }
            k++;
        }
        // Copy remaining elements from left half
        while (i <= mid) {
            temp[k] = arr[i];
            i++;
            k++;
        }
        // Copy remaining elements from right half
        while (j <= h) {
            temp[k] = arr[j];
            j++;
            k++;
        }
        // Copy sorted elements back to original array
        for (int m = 0; m < temp.length; m++) {
            arr[l + m] = temp[m];
        }
    }

    // Recursive merge sort function
    void mergeSort(int arr[], int l, int h) {
        // Base case: single element
        if (l >= h) return;
        int mid = l + (h - l) / 2;
        // Sort left half
        mergeSort(arr, l, mid);
        // Sort right half
        mergeSort(arr, mid + 1, h);
        // Merge both halves
        merge(arr, l, mid, h);
    }
}
```

Approach 2: Iterative Merge Sort (Bottom-Up)

Idea
- Start with subarrays of size 1
- Merge pairs
- Increase size gradually

Code

```java
class Solution {
    void merge(int[] arr, int l, int mid, int h) {
        int[] temp = new int[h - l + 1];
        int i = l, j = mid + 1, k = 0;
        while (i <= mid && j <= h) {
            if (arr[i] <= arr[j]) temp[k++] = arr[i++];
            else temp[k++] = arr[j++];
        }
        while (i <= mid) temp[k++] = arr[i++];
        while (j <= h) temp[k++] = arr[j++];
        for (int m = 0; m < temp.length; m++) {
            arr[l + m] = temp[m];
        }
    }

    void mergeSort(int[] arr, int n) {
        for (int size = 1; size < n; size *= 2) {
            for (int l = 0; l < n - size; l += 2 * size) {
                int mid = l + size - 1;
                int h = Math.min(l + 2 * size - 1, n - 1);
                merge(arr, l, mid, h);
            }
        }
    }
}
```

Approach 3: Using Built-in Sorting (For Comparison)

```java
Arrays.sort(arr);
```

👉 Internally, Java uses optimized algorithms (TimSort for object arrays, a dual-pivot quicksort for primitive arrays).

Complexity Analysis

Time Complexity
- Best: O(n log n)
- Average: O(n log n)
- Worst: O(n log n)

Space Complexity
- O(n) (extra array for merging)

Why Merge Sort is Powerful
- Stable sorting algorithm
- Works efficiently on large datasets
- Predictable performance
- Used in external sorting (large files)

❌ Why Not Use Bubble/Selection Sort?
- Bubble Sort: O(n²)
- Selection Sort: O(n²)
- Merge Sort: O(n log n) ✅

Key Takeaways
- Merge Sort uses divide and conquer
- Recursion splits the problem into smaller parts
- Merging is the key step
- Always O(n log n), regardless of input

When to Use Merge Sort
- Large datasets
- Linked lists (very efficient)
- Stable sorting required
- External sorting

Conclusion

Merge Sort is one of the most reliable and efficient sorting algorithms.
Understanding its recursive structure and merging process is essential for mastering advanced algorithms. Once you grasp the divide-and-conquer pattern, it becomes easier to solve many complex problems.

Frequently Asked Questions (FAQs)

1. Is Merge Sort stable?
Yes, it maintains the relative order of equal elements.

2. Why is extra space required?
Because we use a temporary array during merging.

3. Can it be done in-place?
Not efficiently; standard merge sort requires extra space.
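To check the walkthrough end to end, here is a minimal standalone driver. It is a compressed version of the Approach 1 code (same merge logic, inlined into one static method; the class name `MergeSortDemo` is just for this sketch), run on the article's example array:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursive merge sort with the merge step inlined
    static void mergeSort(int[] arr, int l, int h) {
        if (l >= h) return; // base case: single element
        int mid = l + (h - l) / 2;
        mergeSort(arr, l, mid);      // sort left half
        mergeSort(arr, mid + 1, h);  // sort right half
        // Merge the two sorted halves into a temp array
        int[] temp = new int[h - l + 1];
        int i = l, j = mid + 1, k = 0;
        while (i <= mid && j <= h) temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
        while (i <= mid) temp[k++] = arr[i++];
        while (j <= h) temp[k++] = arr[j++];
        System.arraycopy(temp, 0, arr, l, temp.length);
    }

    public static void main(String[] args) {
        int[] arr = {4, 1, 3, 9, 7};
        mergeSort(arr, 0, arr.length - 1);
        System.out.println(Arrays.toString(arr)); // [1, 3, 4, 7, 9]
    }
}
```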

Tags: GeeksforGeeks, Medium, Sorting, Merge Sort, Java
Search in Rotated Sorted Array II – Binary Search with Duplicates Explained (LeetCode 81)


Try the Question

Before reading the explanation, try solving the problem yourself:
👉 https://leetcode.com/problems/search-in-rotated-sorted-array-ii/

Practicing the problem first helps develop stronger problem-solving intuition, especially for binary search variations.

Problem Statement

You are given an integer array nums that is sorted in non-decreasing order.

Example of a sorted array:
[0,1,2,4,4,4,5,6,6,7]

Before being passed to your function, the array may be rotated at some pivot index k. After rotation, the structure becomes:
[nums[k], nums[k+1], ..., nums[n-1], nums[0], nums[1], ..., nums[k-1]]

Example:
Original array: [0,1,2,4,4,4,5,6,6,7]
Rotated at index 5: [4,5,6,6,7,0,1,2,4,4]

You are also given an integer target. Your task is to:
- Return true if the target exists in the array
- Return false if the target does not exist

The goal is to minimize the number of operations, which suggests using Binary Search.

Example Walkthrough

Example 1
Input: nums = [2,5,6,0,0,1,2], target = 0
Output: true
Explanation: 0 exists in the array.

Example 2
Input: nums = [2,5,6,0,0,1,2], target = 3
Output: false
Explanation: 3 does not exist in the array.

Understanding the Core Challenge

This problem is very similar to the classic problem Search in Rotated Sorted Array (LeetCode 33). However, there is an important difference.

Difference Between the Two Problems
- Rotated Sorted Array: all elements are distinct
- Rotated Sorted Array II: the array may contain duplicates

Duplicates introduce ambiguity during binary search.

Why Duplicates Make the Problem Harder

In the previous problem, we relied on this rule:

If nums[left] <= nums[mid] → the left half is sorted

But duplicates can break this assumption. Example:

nums = [1,0,1,1,1]

If left = 0, mid = 2, right = 4, then:
nums[left] = 1, nums[mid] = 1, nums[right] = 1

Here we cannot determine which half is sorted. This is the main complication introduced by duplicates.

Key Idea to Handle Duplicates

When the values at left, mid, and right are the same, the algorithm cannot decide which half is sorted. To resolve this situation, we shrink the search space:

left++
right--

This gradually removes duplicate values and allows binary search to continue.

Modified Binary Search Strategy

The algorithm works as follows:

Step 1 — Calculate the middle index.
Step 2 — If the middle element equals the target, return true.
Step 3 — If duplicates block decision-making (nums[left] == nums[mid] == nums[right]), move both pointers inward: left++, right--.
Step 4 — Otherwise, determine which half is sorted and apply normal binary search logic.

Java Implementation

```java
class Solution {
    public boolean search(int[] nums, int target) {
        int l = 0;
        int r = nums.length - 1;
        while (l <= r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] == target) return true;

            // Handle duplicates
            if (nums[l] == nums[mid] && nums[mid] == nums[r]) {
                l++;
                r--;
            }
            // Left half sorted
            else if (nums[l] <= nums[mid]) {
                if (nums[l] <= target && target < nums[mid]) {
                    r = mid - 1;
                } else {
                    l = mid + 1;
                }
            }
            // Right half sorted
            else {
                if (nums[mid] < target && target <= nums[r]) {
                    l = mid + 1;
                } else {
                    r = mid - 1;
                }
            }
        }
        return false;
    }
}
```

Step-by-Step Example

Array: [2,5,6,0,0,1,2], target = 0

Iteration 1: l = 0, r = 6, so mid = 3 and nums[mid] = 0. Target found immediately. Return true.

Time Complexity

Best Case: O(log n) — when duplicates do not interfere with binary search decisions.

Worst Case: O(n) — when many duplicate values force the algorithm to shrink the search space one element at a time. Example worst case: [1,1,1,1,1,1,1,1,1]. Binary search cannot divide this array effectively.

Space Complexity

O(1) — the algorithm only uses a few variables and does not require extra memory.

Follow-Up: How Do Duplicates Affect Runtime?

Without duplicates, binary search always reduces the search space by half, giving O(log n) time.

With duplicates, we sometimes cannot determine which half is sorted. In such cases, we shrink the search space linearly (left++, right--), which may degrade performance to O(n) in the worst case. However, in most practical cases the algorithm still performs close to logarithmic time.

Key Takeaways
✔ The array is sorted but rotated
✔ Duplicates introduce ambiguity in binary search
✔ Special handling is required when nums[left] == nums[mid] == nums[right]
✔ The algorithm combines binary search with duplicate handling
✔ Worst-case complexity may degrade to O(n)

Final Thoughts

This problem is a natural extension of the rotated sorted array search problem. It tests your ability to adapt binary search to more complex conditions.

Understanding this pattern is valuable because similar techniques appear in many interview problems involving:
- Rotated arrays
- Binary search edge cases
- Handling duplicates in sorted data structures

Mastering this approach strengthens both algorithmic thinking and interview preparation.
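As a quick sanity check, here is the same search logic as a standalone static method (the wrapper class `RotatedSearchDemo` exists only for this sketch), run against both examples and the ambiguous-duplicates array [1,0,1,1,1] discussed above:

```java
public class RotatedSearchDemo {
    // Same logic as the Solution class, exposed as a static method for testing
    static boolean search(int[] nums, int target) {
        int l = 0, r = nums.length - 1;
        while (l <= r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] == target) return true;
            if (nums[l] == nums[mid] && nums[mid] == nums[r]) {
                // Duplicates block the decision: shrink the window
                l++;
                r--;
            } else if (nums[l] <= nums[mid]) {
                // Left half sorted
                if (nums[l] <= target && target < nums[mid]) r = mid - 1;
                else l = mid + 1;
            } else {
                // Right half sorted
                if (nums[mid] < target && target <= nums[r]) l = mid + 1;
                else r = mid - 1;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(search(new int[]{2, 5, 6, 0, 0, 1, 2}, 0)); // true
        System.out.println(search(new int[]{2, 5, 6, 0, 0, 1, 2}, 3)); // false
        System.out.println(search(new int[]{1, 0, 1, 1, 1}, 0));       // true
    }
}
```

The third call is the interesting one: the first iteration hits the all-equal case and falls back to shrinking the window, after which ordinary binary search takes over.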

Tags: LeetCode, Binary Search, Rotated Sorted Array, Java, Medium
Maximum Absolute Sum of Any Subarray


LeetCode Problem 1749

Link of the problem to try: Link

You are given an integer array nums. The absolute sum of a subarray [nums[l], nums[l+1], ..., nums[r-1], nums[r]] is abs(nums[l] + nums[l+1] + ... + nums[r-1] + nums[r]).

Return the maximum absolute sum of any (possibly empty) subarray of nums.

Note that abs(x) is defined as follows:
- If x is a negative integer, then abs(x) = -x.
- If x is a non-negative integer, then abs(x) = x.

Example 1:
Input: nums = [1,-3,2,3,-4]
Output: 5
Explanation: The subarray [2,3] has absolute sum = abs(2+3) = abs(5) = 5.

Example 2:
Input: nums = [2,-5,1,-4,3,-2]
Output: 8
Explanation: The subarray [-5,1,-4] has absolute sum = abs(-5+1-4) = abs(-8) = 8.

Constraints:
- 1 <= nums.length <= 10⁵
- -10⁴ <= nums[i] <= 10⁴

My Thinking

The core logic is this: use Kadane's algorithm to find the maximum subarray sum, and in the same pass find the minimum subarray sum. Then compare the absolute values of the two and return the larger one — that is exactly what the question asks for.

Kadane's Algorithm

Kadane's algorithm is highly efficient: it solves the maximum subarray problem in a single pass, giving a time complexity of O(n), since it must visit each of the n elements of the input array exactly once.

The core principle is that any negative prefix sum acts as a "debt" that subsequent positive numbers must overcome. The strategy is to always pursue the maximum sum.
If the running sum becomes negative at any point, it is reset, effectively 'discarding' the previous negative sequence, and the search for a new maximum sum begins with the next element. Here is a video for a better understanding.

Solution:
Here is the solution for this question in Java using Kadane's Algorithm:

public int maxAbsoluteSum(int[] nums) {
    int max = Integer.MIN_VALUE;
    int min = Integer.MAX_VALUE;
    int maxSum = 0;
    int minSum = 0;
    for (int i = 0; i < nums.length; i++) {
        maxSum = Math.max(maxSum + nums[i], nums[i]);
        max = Math.max(max, maxSum);
        minSum = Math.min(minSum + nums[i], nums[i]);
        min = Math.min(min, minSum);
    }
    return Math.max(max, Math.abs(min));
}
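For quick verification, the same two-sided Kadane approach can be repackaged as a self-contained class and run against the two examples from the problem statement (the class name and the main harness are mine, not from the original post):

```java
class MaxAbsSumDemo {
    // Track both the maximum and minimum subarray sums in one pass;
    // the answer is the larger of |max| and |min|.
    static int maxAbsoluteSum(int[] nums) {
        int max = 0, min = 0, maxSum = 0, minSum = 0; // 0 covers the empty subarray
        for (int x : nums) {
            maxSum = Math.max(maxSum + x, x);
            max = Math.max(max, maxSum);
            minSum = Math.min(minSum + x, x);
            min = Math.min(min, minSum);
        }
        return Math.max(max, Math.abs(min));
    }

    public static void main(String[] args) {
        System.out.println(maxAbsoluteSum(new int[]{1, -3, 2, 3, -4}));    // 5
        System.out.println(maxAbsoluteSum(new int[]{2, -5, 1, -4, 3, -2})); // 8
    }
}
```

Both outputs match the expected answers from Examples 1 and 2 above.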

leetcodemediumkadane algorithm
Mastering Binary Search – LeetCode 704 Explained


Introduction
Binary Search is one of the most fundamental and powerful algorithms in computer science. If you're preparing for coding interviews, mastering Binary Search is absolutely essential. In this blog, we'll break down LeetCode 704 – Binary Search, explain the algorithm in detail, walk through a Java implementation, analyze complexity, and recommend additional problems to strengthen your understanding.

You can try this problem: Problem Link

📌 Problem Overview
You are given:
A sorted array of integers nums (ascending order)
An integer target
Your task is to return the index of target if it exists in the array. Otherwise, return -1.

Example 1
Input: nums = [-1,0,3,5,9,12], target = 9
Output: 4

Example 2
Input: nums = [-1,0,3,5,9,12], target = 2
Output: -1

Constraints
1 <= nums.length <= 10⁴
All integers are unique
The array is sorted in ascending order
Required Time Complexity: O(log n)

🚀 Understanding the Binary Search Algorithm
Binary Search works only on sorted arrays. Instead of checking each element one by one (like Linear Search), Binary Search:
Finds the middle element.
Compares it with the target.
Eliminates half of the search space.
Repeats until the element is found or the search space is empty.

Why is it Efficient?
Every iteration cuts the search space in half. If the array size is n, the number of operations becomes log₂(n). This makes it extremely efficient compared to linear search (O(n)).

🧠 Step-by-Step Algorithm
Initialize two pointers: low = 0, high = nums.length - 1
While low <= high:
Calculate the middle index: mid = low + (high - low) / 2
If nums[mid] == target, return mid
If target > nums[mid], search the right half → low = mid + 1
Else search the left half → high = mid - 1
If the loop ends, return -1

💻 Java Code Explained
Here is the implementation:

class Solution {
    public int search(int[] nums, int target) {
        int high = nums.length - 1;
        int low = 0;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (target == nums[mid]) {
                return mid;
            } else if (target > nums[mid]) {
                low = mid + 1;
            } else {
                high = mid - 1;
            }
        }
        return -1;
    }
}

🔍 Code Breakdown
1️⃣ Initialize Boundaries
int high = nums.length - 1;
int low = 0;
This defines the search space from index 0 to n-1.

2️⃣ Loop Condition
while (low <= high)
The loop continues as long as there is a valid search range.

3️⃣ Safe Mid Calculation
int mid = low + (high - low) / 2;
This is preferred over (low + high) / 2, because low + high may cause integer overflow for large arrays. This form prevents that.

4️⃣ Comparison Logic
if (target == nums[mid]) — if found, return the index.
else if (target > nums[mid]) → low = mid + 1; search the right half.
else → high = mid - 1; search the left half.

5️⃣ Not Found Case
return -1; — if the loop finishes without finding the target.

⏱ Time and Space Complexity
Time Complexity: O(log n) — each iteration halves the search space.
Space Complexity: O(1) — no extra space used; purely iterative.

🔥 Why This Problem Is Important
LeetCode 704 is:
The foundation of all Binary Search problems
A template question
Frequently asked in interviews
Required to understand advanced problems like: Search in Rotated Sorted Array, Find First and Last Position, Peak Element, Binary Search on Answer

📚 Recommended Binary Search Practice Problems
After solving this, practice these in order:
🟢 Easy
35. Search Insert Position
69. Sqrt(x)
278. First Bad Version
🟡 Medium
34. Find First and Last Position of Element in Sorted Array
33. Search in Rotated Sorted Array
74. Search a 2D Matrix
875. Koko Eating Bananas (Binary Search on Answer)
🔴 Advanced Pattern Practice
1011. Capacity To Ship Packages Within D Days
410. Split Array Largest Sum
These will help you master:
Lower bound / upper bound
Binary search on monotonic functions
Searching in rotated arrays
Searching in 2D matrices
Binary search on answer pattern

🎯 Final Thoughts
Binary Search is not just a single algorithm — it's a pattern. If you truly understand:
How the search space shrinks
When to move left vs right
How to calculate mid safely
Loop conditions (low <= high vs low < high)
You can solve 50+ interview problems easily. LeetCode 704 is the perfect starting point. Master this template, and you unlock an entire category of problems.
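For completeness, the template above can be packaged as a runnable class and checked against both examples from the problem overview (the class name and main harness are mine):

```java
class BinarySearchDemo {
    // Iterative binary search over a sorted ascending array; returns index or -1.
    static int search(int[] nums, int target) {
        int low = 0, high = nums.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // overflow-safe midpoint
            if (nums[mid] == target) return mid;
            else if (target > nums[mid]) low = mid + 1;  // search right half
            else high = mid - 1;                          // search left half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] nums = {-1, 0, 3, 5, 9, 12};
        System.out.println(search(nums, 9)); // 4
        System.out.println(search(nums, 2)); // -1
    }
}
```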

Binary SearchLeetCodeEasy
Quick Sort Algorithm Explained | Java Implementation, Partition Logic & Complexity


Introduction
Quick Sort is one of the most powerful and widely used sorting algorithms in computer science. It follows the Divide and Conquer approach and is known for its excellent average-case performance.

What makes Quick Sort special is:
It sorts in-place (no extra array required)
It is faster in practice than many other O(n log n) algorithms like Merge Sort
It is heavily used in real-world systems and libraries

In this article, we'll go deep into:
Intuition behind Quick Sort
Partition logic (the most important part)
Step-by-step dry run
Java implementation with comments
Time complexity analysis
Common mistakes and optimizations

🔗 Problem Link
GeeksforGeeks: Quick Sort

Problem Statement
Given an array arr[], sort it in ascending order using Quick Sort.
Requirements:
Use Divide and Conquer
Choose a pivot element
Place the pivot in its correct position
Smaller elements → left side
Greater elements → right side

Examples
Example 1
Input: arr = [4, 1, 3, 9, 7]
Output: [1, 3, 4, 7, 9]
Example 2
Input: arr = [2, 1, 6, 10, 4, 1, 3, 9, 7]
Output: [1, 1, 2, 3, 4, 6, 7, 9, 10]

Core Idea of Quick Sort
Pick a pivot → Place it correctly → Recursively sort left & right

🔥 Key Insight (Partition is Everything)
Quick Sort depends entirely on partitioning. 👉 After partition:
The pivot is at its correct sorted position
Left side → smaller elements
Right side → larger elements

Intuition (Visual Understanding)
Consider: [4, 1, 3, 9, 7]
Step 1: Choose a pivot. Let's say pivot = 4.
Step 2: Rearrange elements: [1, 3] 4 [9, 7]. Now: left → smaller, right → larger.
Step 3: Apply recursively. Left: [1, 3]. Right: [9, 7].
Final result: [1, 3, 4, 7, 9]

Partition Logic (Most Important)
This implementation uses:
Pivot = first element
Two pointers: i → moves forward, j → moves backward

Java Code

class Solution {
    public void quickSort(int[] arr, int low, int high) {
        // Base case: a subarray with 0 or 1 elements is already sorted
        if (low < high) {
            // Partition array and get pivot index
            int pivotInd = partition(arr, low, high);
            // Sort left part
            quickSort(arr, low, pivotInd - 1);
            // Sort right part
            quickSort(arr, pivotInd + 1, high);
        }
    }

    // Function to swap two elements
    void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    private int partition(int[] arr, int low, int high) {
        int pivot = arr[low]; // choosing first element as pivot
        int i = low + 1;      // start from next element
        int j = high;         // start from end
        while (i <= j) {
            // Move i forward until element > pivot
            while (i <= high && arr[i] <= pivot) {
                i++;
            }
            // Move j backward until element <= pivot
            while (j >= low && arr[j] > pivot) {
                j--;
            }
            // Swap if pointers haven't crossed
            if (i < j) {
                swap(arr, i, j);
            }
        }
        // Place pivot at correct position
        swap(arr, low, j);
        return j; // return pivot index
    }
}

Step-by-Step Dry Run
Input: [4, 1, 3, 9, 7]
Execution:
Pivot = 4
i → moves until element > 4 (stops at 9, index 3)
j → moves until element ≤ 4 (stops at 3, index 2)
The pointers have crossed, so the pivot is swapped with arr[j]
Result of the first partition: [3, 1, 4, 9, 7] — the pivot 4 is now at its correct index 2; the recursion then sorts [3, 1] and [9, 7] separately.

Complexity Analysis
Time Complexity
Case         | Complexity
Best Case    | O(n log n)
Average Case | O(n log n)
Worst Case   | O(n²)

Why does the worst case happen? When the array is already sorted or reverse sorted, the pivot always becomes the smallest/largest element.
Space Complexity: O(log n) (recursion stack)

❌ Common Mistakes
Wrong partition logic
Infinite loops in while conditions
Incorrect pivot placement
Not handling duplicates properly

⚡ Optimizations
1. Random Pivot — avoids the worst case:
int pivotIndex = low + new Random().nextInt(high - low + 1);
swap(arr, low, pivotIndex);
2. Median of Three — choose a better pivot:
median(arr[low], arr[mid], arr[high])

Quick Sort vs Merge Sort
Feature    | Quick Sort          | Merge Sort
Space      | O(log n)            | O(n)
Speed      | Faster in practice  | Slower in practice
Stable     | No                  | Yes
Worst Case | O(n²)               | O(n log n)

Why Quick Sort is Preferred
Cache-friendly
In-place sorting
Faster in real-world scenarios

Key Takeaways
Partition is the heart of Quick Sort
The pivot must be placed correctly
Recursion splits the problem efficiently
Avoid the worst case using a random pivot

When to Use Quick Sort
Large arrays
Memory constraints (in-place)
Performance-critical applications

Conclusion
Quick Sort is one of the most efficient and practical sorting algorithms. Mastering its partition logic is crucial for solving advanced problems and performing well in coding interviews. Understanding how the pointers move and how the pivot is placed will make this algorithm intuitive and powerful.

Frequently Asked Questions (FAQs)
1. Is Quick Sort stable? No, it is not stable.
2. Why is Quick Sort faster than Merge Sort? Because it avoids extra space and is cache-efficient.
3. What is the most important part? 👉 The partition logic.
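As a runnable harness for the partition scheme above, with the random-pivot tweak from the Optimizations section folded in (the class name and main harness are mine; the sorted output is the same regardless of which pivot is drawn):

```java
import java.util.Arrays;
import java.util.Random;

class QuickSortDemo {
    static final Random rnd = new Random();

    static void quickSort(int[] arr, int low, int high) {
        if (low < high) {
            int pivotInd = partition(arr, low, high);
            quickSort(arr, low, pivotInd - 1);
            quickSort(arr, pivotInd + 1, high);
        }
    }

    static void swap(int[] arr, int i, int j) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }

    static int partition(int[] arr, int low, int high) {
        // Random pivot: move a randomly chosen element to the front,
        // then partition exactly as in the article's scheme.
        swap(arr, low, low + rnd.nextInt(high - low + 1));
        int pivot = arr[low];
        int i = low + 1, j = high;
        while (i <= j) {
            while (i <= high && arr[i] <= pivot) i++;  // find element > pivot
            while (j >= low && arr[j] > pivot) j--;    // find element <= pivot
            if (i < j) swap(arr, i, j);
        }
        swap(arr, low, j); // place pivot at its final position
        return j;
    }

    public static void main(String[] args) {
        int[] a = {2, 1, 6, 10, 4, 1, 3, 9, 7};
        quickSort(a, 0, a.length - 1);
        System.out.println(Arrays.toString(a)); // [1, 1, 2, 3, 4, 6, 7, 9, 10]
    }
}
```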

MediumJavaSortingQuick SortGeeksforGeeks
Search in Rotated Sorted Array – Binary Search Explained with Java Solution (LeetCode 33)


Try the Question
Before reading the solution, try solving the problem yourself:
👉 https://leetcode.com/problems/search-in-rotated-sorted-array/
Attempting the problem first helps build strong algorithmic intuition, which is extremely valuable during coding interviews.

Problem Statement
You are given an integer array nums that was originally sorted in ascending order with distinct elements.
Example of a sorted array: [0,1,2,4,5,6,7]
Before the array is provided to the function, it may have been rotated at some pivot index k. After rotation, the array structure becomes:
[nums[k], nums[k+1], ..., nums[n-1], nums[0], nums[1], ..., nums[k-1]]
For example:
Original array: [0,1,2,4,5,6,7]
Rotated by 3 positions: [4,5,6,7,0,1,2]
You are also given an integer target. The goal is to return the index of the target element in the array. If the element does not exist, return -1.

Important Constraint
The algorithm must run in O(log n) time complexity, which strongly suggests using Binary Search.

Example Walkthrough
Example 1
Input: nums = [4,5,6,7,0,1,2], target = 0
Output: 4
Explanation: 0 exists at index 4
Example 2
Input: nums = [4,5,6,7,0,1,2], target = 3
Output: -1
Explanation: 3 does not exist in the array
Example 3
Input: nums = [1], target = 0
Output: -1

Understanding the Core Challenge
If the array were fully sorted, the problem would be straightforward because binary search could be applied directly.
Example: [1,2,3,4,5,6,7]
However, due to rotation — [4,5,6,7,0,1,2] — the array is no longer globally sorted. But an important observation makes the problem solvable.

Key Observation
Even after rotation, at least one half of the array is always sorted.
For example, splitting [4,5,6,7,0,1,2] at the middle: the left part [4,5,6,7] is sorted, and the right part [0,1,2] is sorted.
This property allows the use of binary search with additional conditions.

Approach 1 — Linear Scan (Brute Force)
The simplest method is to iterate through the entire array and check each element.
Algorithm
Traverse the array from start to end.
Compare every element with the target.
Return the index if found.
Code

for (int i = 0; i < nums.length; i++) {
    if (nums[i] == target) {
        return i;
    }
}
return -1;

Time Complexity: O(n)
Space Complexity: O(1)
Although simple, this solution does not satisfy the required O(log n) complexity.

Approach 2 — Modified Binary Search
A better solution uses binary search with sorted-half detection.
Idea
At every step:
Calculate the middle index.
Determine which half of the array is sorted.
Check if the target lies inside that sorted half.
Adjust the search range accordingly.
Implementation

public int search(int[] nums, int target) {
    int l = 0;
    int r = nums.length - 1;
    while (l <= r) {
        int mid = l + (r - l) / 2;
        if (nums[mid] == target) {
            return mid;
        }
        // left half sorted
        if (nums[l] <= nums[mid]) {
            if (nums[l] <= target && target < nums[mid]) {
                r = mid - 1;
            } else {
                l = mid + 1;
            }
        }
        // right half sorted
        else {
            if (nums[mid] < target && target <= nums[r]) {
                l = mid + 1;
            } else {
                r = mid - 1;
            }
        }
    }
    return -1;
}

Time Complexity: O(log n)
Space Complexity: O(1)
This is the most common interview solution.

Approach 3 — Find Rotation Point, Then Apply Binary Search
Another elegant strategy is to first locate the minimum element in the rotated array, which represents the rotation index (pivot). Once the pivot is known, the array can be logically split into two sorted subarrays.
Example: in [4,5,6,7,0,1,2] the pivot is at index 4 (value 0), so two sorted sections exist: [4,5,6,7] and [0,1,2].
After identifying the pivot:
Decide which part may contain the target.
Apply standard binary search on that portion.

Step 1 — Finding the Minimum Element (Rotation Pivot)
The smallest element indicates where the rotation happened.

public int findMinIndex(int[] nums) {
    int l = 0;
    int r = nums.length - 1;
    while (l < r) {
        int mid = l + (r - l) / 2;
        if (nums[mid] > nums[r]) {
            l = mid + 1;
        } else {
            r = mid;
        }
    }
    return l;
}

Why This Works
If nums[mid] > nums[r], the minimum element must be on the right side. Otherwise, it lies on the left side (including mid).

Step 2 — Standard Binary Search
After determining which half contains the target, a normal binary search is applied.

public int binarySearch(int[] nums, int l, int r, int target) {
    while (l <= r) {
        int mid = l + (r - l) / 2;
        if (nums[mid] == target) {
            return mid;
        } else if (nums[mid] < target) {
            l = mid + 1;
        } else {
            r = mid - 1;
        }
    }
    return -1;
}

Complete Solution (Pivot + Binary Search)

class Solution {
    public int findMinIndex(int[] nums) {
        int l = 0;
        int r = nums.length - 1;
        while (l < r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] > nums[r]) {
                l = mid + 1;
            } else {
                r = mid;
            }
        }
        return l;
    }

    public int binarySearch(int[] nums, int l, int r, int target) {
        while (l <= r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] == target) {
                return mid;
            } else if (nums[mid] < target) {
                l = mid + 1;
            } else {
                r = mid - 1;
            }
        }
        return -1;
    }

    public int search(int[] nums, int target) {
        int pivot = findMinIndex(nums);
        if (nums[pivot] <= target && target <= nums[nums.length - 1]) {
            return binarySearch(nums, pivot, nums.length - 1, target);
        }
        return binarySearch(nums, 0, pivot - 1, target);
    }
}

Time Complexity
Finding the pivot: O(log n)
Binary search: O(log n)
Total complexity: O(log n)
Space Complexity: O(1) — no additional memory is used.

Key Takeaways
✔ The array is sorted but rotated
✔ A rotation creates two sorted sections
✔ Binary search can still be applied
✔ Either detect the sorted half directly or locate the pivot first
✔ Both optimized approaches achieve O(log n) complexity

Final Thoughts
This problem is a classic binary search variation frequently asked in coding interviews. It evaluates the ability to:
Recognize structural patterns in arrays
Adapt binary search to non-standard conditions
Maintain optimal algorithmic complexity
Understanding this pattern also helps solve related problems such as:
Find Minimum in Rotated Sorted Array
Search in Rotated Sorted Array II
Find Rotation Count in Array
Mastering these concepts significantly strengthens binary search problem-solving skills for technical interviews.
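The pivot-based approach (Approach 3) can be exercised end to end with a small harness; the class name and main method below are mine, the logic is the same as above:

```java
class PivotSearchDemo {
    // Step 1: locate the rotation pivot (index of the minimum element).
    static int findMinIndex(int[] nums) {
        int l = 0, r = nums.length - 1;
        while (l < r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] > nums[r]) l = mid + 1; // minimum is strictly right of mid
            else r = mid;                          // minimum is at mid or left of it
        }
        return l;
    }

    // Step 2: standard binary search on a sorted range [l, r].
    static int binarySearch(int[] nums, int l, int r, int target) {
        while (l <= r) {
            int mid = l + (r - l) / 2;
            if (nums[mid] == target) return mid;
            else if (nums[mid] < target) l = mid + 1;
            else r = mid - 1;
        }
        return -1;
    }

    // Pick the sorted section that could contain target, then search it.
    static int search(int[] nums, int target) {
        int pivot = findMinIndex(nums);
        if (nums[pivot] <= target && target <= nums[nums.length - 1]) {
            return binarySearch(nums, pivot, nums.length - 1, target);
        }
        return binarySearch(nums, 0, pivot - 1, target);
    }

    public static void main(String[] args) {
        int[] nums = {4, 5, 6, 7, 0, 1, 2};
        System.out.println(findMinIndex(nums)); // 4 (the position of 0)
        System.out.println(search(nums, 0));    // 4
        System.out.println(search(nums, 3));    // -1
    }
}
```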

Binary SearchJavaRotated Sorted ArrayLeetCodeMedium
Sort Colors


LeetCode Problem 75
Link of the Problem to try: Link

Problem Statement: Given an array nums with n objects colored red, white, or blue, sort them in-place so that objects of the same color are adjacent, with the colors in the order red, white, and blue. We will use the integers 0, 1, and 2 to represent the color red, white, and blue, respectively.

Example 1:
Input: nums = [2,0,2,1,1,0]
Output: [0,0,1,1,2,2]

Example 2:
Input: nums = [2,0,1]
Output: [0,1,2]

You must solve this problem without using the library's sort function.

My Approach (1)
I solved this question by counting the frequency of each element (0, 1, and 2). I avoided nested loops; although I traverse the array multiple times with separate for loops, each pass is linear, so the overall time complexity is still O(n).

Here is the approach:

int zeroCounter = 0;
int oneCounter = 0;
int twoCounter = 0;
int counter = 0;
for (int i = 0; i < nums.length; i++) {
    if (nums[i] == 0) {
        zeroCounter++;
    } else if (nums[i] == 1) {
        oneCounter++;
    } else {
        twoCounter++;
    }
}
for (int i = 0; i < zeroCounter; i++) {
    nums[counter] = 0;
    counter++;
}
for (int i = 0; i < oneCounter; i++) {
    nums[counter] = 1;
    counter++;
}
for (int i = 0; i < twoCounter; i++) {
    nums[counter] = 2;
    counter++;
}

My Approach (2)
This approach uses the DNF (Dutch National Flag) algorithm. The idea is to focus on two of the three values and make sure they end up in the correct place — the last one falls into the correct position automatically. Like Approach (1) it runs in O(n) time, but it uses only a single pass over the array, which makes it far cleaner.

int low = 0;
int high = nums.length - 1;
int curr = 0;
while (curr <= high) {
    if (nums[curr] == 0) {
        int temp = nums[low];
        nums[low] = nums[curr];
        nums[curr] = temp;
        low++;
        curr++;
    } else if (nums[curr] == 1) {
        curr++;
    } else {
        int temp = nums[curr];
        nums[curr] = nums[high];
        nums[high] = temp;
        high--;
    }
}
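The Dutch National Flag pass above can be tried out with a small self-contained harness (the class name and main method are mine):

```java
import java.util.Arrays;

class SortColorsDemo {
    // One-pass Dutch National Flag partition:
    // everything before low is 0, everything after high is 2, 1s stay in the middle.
    static void sortColors(int[] nums) {
        int low = 0, curr = 0, high = nums.length - 1;
        while (curr <= high) {
            if (nums[curr] == 0) {
                int t = nums[low]; nums[low] = nums[curr]; nums[curr] = t;
                low++;
                curr++;
            } else if (nums[curr] == 1) {
                curr++;
            } else { // nums[curr] == 2
                int t = nums[curr]; nums[curr] = nums[high]; nums[high] = t;
                high--; // curr stays put: the swapped-in value has not been examined yet
            }
        }
    }

    public static void main(String[] args) {
        int[] nums = {2, 0, 2, 1, 1, 0};
        sortColors(nums);
        System.out.println(Arrays.toString(nums)); // [0, 0, 1, 1, 2, 2]
    }
}
```

Note the asymmetry in the last branch: after swapping a 2 to the back, `curr` is not advanced, because the element swapped in from `high` is still unexamined.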

LeetCodeMediumTwo PointersDutch National Flag Algorithm
Queue Data Structure Complete Guide - Java Explained With All Operations


Introduction
If you have been learning Data Structures and Algorithms, you have probably already spent time with arrays, linked lists, and stacks. Now it is time to meet one of the most important and widely used data structures in computer science — the Queue.
Queue is not just a theoretical concept. It powers some of the most critical systems you use every day — from how your printer handles jobs, to how your CPU schedules tasks, to how Google Maps finds the shortest path between two locations. Understanding Queue deeply means understanding how real systems work.
In this complete guide we will cover absolutely everything — what a Queue is, how it differs from a Stack, every type of Queue, all operations with code, Java implementations, time and space complexity, common interview questions, and the most important LeetCode problems that use Queue.

What Is a Queue?
A Queue is a linear data structure that follows the FIFO principle — First In First Out. This means the element that was added first is the one that gets removed first.
Think of it exactly like a real-world queue (a line of people). The person who joined the line first gets served first. No cutting in line, no serving from the back — strict order from front to back.
This is the fundamental difference between a Queue and a Stack:
Stack → LIFO (Last In First Out) — like a stack of plates, you take from the top
Queue → FIFO (First In First Out) — like a line of people, you serve from the front

Real Life Examples of Queue
Before writing a single line of code, let us understand where queues appear in real life. This will make every technical concept feel natural.
Printer Queue — when you send multiple documents to print, they print in the order they were sent. The first document sent prints first.
CPU Task Scheduling — your operating system manages running processes in a queue.
Tasks get CPU time in the order they arrive (in basic scheduling).
Customer Service Call Center — when you call a helpline and are put on hold, you are placed in a queue. The first caller on hold gets connected first.
WhatsApp Messages — messages are delivered in the order they are sent. The first message sent is the first one received.
BFS (Breadth First Search) — every time you use Google Maps or any navigation app to find the shortest path, it uses BFS internally, which is entirely powered by a Queue.
Ticket Booking Systems — online booking portals process requests in the order they arrive. First come first served.

Queue Terminology — Key Terms You Must Know
Before diving into code, let us get the vocabulary right:
Front — the end from which elements are removed (dequeued). This is where the "first person in line" stands.
Rear (or Back) — the end at which elements are added (enqueued). New arrivals join here.
Enqueue — the operation of adding an element to the rear of the queue. Like joining the back of a line.
Dequeue — the operation of removing an element from the front of the queue. Like the first person in line being served and leaving.
Peek (or Front) — looking at the front element without removing it. Like seeing who is first in line without serving them yet.
isEmpty — checking whether the queue has no elements.
isFull — relevant for fixed-size queues, checking whether no more elements can be added.

Types of Queues
This is where most beginners get confused. There is not just one type of Queue — there are several variations, each designed to solve specific problems.

1. Simple Queue (Linear Queue)
The most basic form. Elements enter from the rear and leave from the front. Strict FIFO, nothing fancy.

Enqueue → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue
           rear              front

Problem with Simple Queue: In an array-based implementation, once elements are dequeued from the front, those slots cannot be reused even if there is space. This wastes memory. This is why the Circular Queue was invented.

2. Circular Queue
In a Circular Queue, the rear wraps around to the front when it reaches the end of the array. The last position connects back to the first, forming a circle. This solves the wasted-space problem of simple queues.

    [1] [2] [3]
   /           \
  [6]         [4]
   \           /
     [5] ← rear

Used in: CPU scheduling, memory management, traffic light systems, streaming buffers.

3. Double Ended Queue (Deque)
A Deque (pronounced "deck") allows insertion and deletion from both ends — front and rear. It is the most flexible queue type.

Enqueue Front → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue Front
Enqueue Rear  → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue Rear

Two subtypes:
Input Restricted Deque — insertion only at rear, deletion from both ends
Output Restricted Deque — deletion only at front, insertion at both ends
Used in: browser history (back and forward), undo-redo operations, sliding window problems.

4. Priority Queue
Elements are not served in FIFO order — instead each element has a priority, and the element with the highest priority is served first regardless of when it was added. Think of an emergency room. A patient with a critical injury jumps ahead of someone with a minor cut even if they arrived later.
Two types:
Max Priority Queue — highest value = highest priority
Min Priority Queue — lowest value = highest priority
Used in: Dijkstra's shortest path, Huffman encoding, A* search algorithm, task scheduling with priorities.

5. Blocking Queue
A thread-safe queue used in multi-threading. If the queue is empty, a thread trying to dequeue will wait (block) until an element is available.
If the queue is full, a thread trying to enqueue will wait until space is available.
Used in: Producer-Consumer problems, thread pool implementations, Java's java.util.concurrent package.

Queue Operations and Time Complexity
Every queue operation has a specific time complexity that you must know cold for interviews.

Operation  | Description               | Time Complexity
Enqueue    | Add element to rear       | O(1)
Dequeue    | Remove element from front | O(1)
Peek/Front | View front element        | O(1)
isEmpty    | Check if queue is empty   | O(1)
Size       | Number of elements        | O(1)
Search     | Find a specific element   | O(n)

Space Complexity: O(n) — where n is the number of elements stored.
All core queue operations are O(1). This is what makes Queue so powerful — no matter how many elements are in the queue, adding and removing always takes constant time.

Implementing Queue in Java — All Ways
Java gives you multiple ways to use a Queue. Let us go through each one.

Way 1: Using LinkedList (Most Common)
LinkedList implements the Queue interface in Java. This is the most commonly used Queue implementation.

import java.util.LinkedList;
import java.util.Queue;

Queue<Integer> queue = new LinkedList<>();

// Enqueue — add to rear
queue.offer(10);
queue.offer(20);
queue.offer(30);

// Peek — view front without removing
System.out.println(queue.peek()); // 10

// Dequeue — remove from front
System.out.println(queue.poll()); // 10
System.out.println(queue.poll()); // 20

// Check empty
System.out.println(queue.isEmpty()); // false

// Size
System.out.println(queue.size()); // 1

offer() vs add() — both add to the queue. add() throws an exception if the queue is full (for bounded queues). offer() returns false instead. Always prefer offer().
poll() vs remove() — both remove from front. remove() throws an exception if the queue is empty. poll() returns null. Always prefer poll().
peek() vs element() — both view the front. element() throws an exception if empty. peek() returns null.
Always prefer peek().

Way 2: Using ArrayDeque (Fastest)
ArrayDeque is faster than LinkedList for Queue operations because it uses a resizable array internally with no node allocation overhead.

import java.util.ArrayDeque;
import java.util.Queue;

Queue<Integer> queue = new ArrayDeque<>();
queue.offer(1);
queue.offer(2);
queue.offer(3);

System.out.println(queue.peek()); // 1
System.out.println(queue.poll()); // 1
System.out.println(queue.size()); // 2

When to use ArrayDeque over LinkedList? Use ArrayDeque whenever possible for Queue or Stack operations. It is faster because it avoids the overhead of node objects that LinkedList creates for every element. In competitive programming and interviews, ArrayDeque is the preferred choice.

Way 3: Using Deque (Double Ended Queue)

import java.util.ArrayDeque;
import java.util.Deque;

Deque<Integer> deque = new ArrayDeque<>();

// Add to front
deque.offerFirst(10);
// Add to rear
deque.offerLast(20);
deque.offerLast(30);

// Remove from front
System.out.println(deque.pollFirst()); // 10
// Remove from rear
System.out.println(deque.pollLast()); // 30

// Peek front and rear
System.out.println(deque.peekFirst()); // 20
System.out.println(deque.peekLast()); // 20

Way 4: Using PriorityQueue

import java.util.PriorityQueue;

// Min Heap — smallest element has highest priority
PriorityQueue<Integer> minPQ = new PriorityQueue<>();
minPQ.offer(30);
minPQ.offer(10);
minPQ.offer(20);
System.out.println(minPQ.poll()); // 10 — smallest comes out first

// Max Heap — largest element has highest priority
PriorityQueue<Integer> maxPQ = new PriorityQueue<>((a, b) -> b - a);
maxPQ.offer(30);
maxPQ.offer(10);
maxPQ.offer(20);
System.out.println(maxPQ.poll()); // 30 — largest comes out first

Way 5: Implementing Queue From Scratch Using Array
Understanding the underlying implementation helps you in interviews when asked to build one from scratch.

class MyQueue {
    private int[] arr;
    private int front;
    private int rear;
    private int size;
    private int capacity;

    public MyQueue(int capacity) {
        this.capacity = capacity;
        arr = new int[capacity];
        front = 0;
        rear = -1;
        size = 0;
    }

    public void enqueue(int val) {
        if (size == capacity) {
            System.out.println("Queue is full!");
            return;
        }
        rear = (rear + 1) % capacity; // circular wrapping
        arr[rear] = val;
        size++;
    }

    public int dequeue() {
        if (isEmpty()) {
            System.out.println("Queue is empty!");
            return -1;
        }
        int val = arr[front];
        front = (front + 1) % capacity; // circular wrapping
        size--;
        return val;
    }

    public int peek() {
        if (isEmpty()) return -1;
        return arr[front];
    }

    public boolean isEmpty() {
        return size == 0;
    }

    public int size() {
        return size;
    }
}

Notice the % capacity in enqueue and dequeue — that is what makes it a Circular Queue. Without this, once the rear reaches the end of the array, you cannot add more even if front has moved forward and freed up space.

Way 6: Implementing Queue Using Two Stacks
This is a very popular interview question — implement a Queue using two stacks. The idea is to use one stack for enqueue and another for dequeue.

class QueueUsingTwoStacks {
    Stack<Integer> s1 = new Stack<>(); // for enqueue
    Stack<Integer> s2 = new Stack<>(); // for dequeue

    public void enqueue(int val) {
        s1.push(val); // always push to s1
    }

    public int dequeue() {
        if (s2.isEmpty()) {
            // transfer all elements from s1 to s2
            // this reverses the order, giving FIFO behavior
            while (!s1.isEmpty()) {
                s2.push(s1.pop());
            }
        }
        return s2.pop();
    }

    public int peek() {
        if (s2.isEmpty()) {
            while (!s1.isEmpty()) {
                s2.push(s1.pop());
            }
        }
        return s2.peek();
    }

    public boolean isEmpty() {
        return s1.isEmpty() && s2.isEmpty();
    }
}

Why does this work? When you transfer elements from s1 to s2, the order reverses. The element that was added first to s1 ends up on top of s2 — which means it gets dequeued first. FIFO achieved using two LIFOs!
Amortized time complexity: Each element is pushed and popped at most twice (once in s1, once in s2). So dequeue is O(1) amortized even though individual calls might take O(n).
This is LeetCode 232 — Implement Queue using Stacks.

Queue vs Stack — Side by Side

Feature     | Queue                      | Stack
Principle   | FIFO — First In First Out  | LIFO — Last In First Out
Insert at   | Rear                       | Top
Remove from | Front                      | Top
Real life   | Line of people             | Stack of plates
Java class  | LinkedList, ArrayDeque     | Stack, ArrayDeque
Main use    | BFS, scheduling            | DFS, backtracking, parsing
Peek        | Front element              | Top element

BFS — The Most Important Application of Queue
Breadth First Search (BFS) is the single most important algorithm that uses a Queue. Understanding BFS is why Queue matters so much in DSA. BFS explores a graph or tree level by level — all nodes at distance 1 first, then all at distance 2, and so on. A Queue naturally enforces this level-by-level behavior.

public void bfs(int start, List<List<Integer>> graph) {
    Queue<Integer> queue = new LinkedList<>();
    boolean[] visited = new boolean[graph.size()];
    queue.offer(start);
    visited[start] = true;
    while (!queue.isEmpty()) {
        int node = queue.poll(); // process front node
        System.out.print(node + " ");
        for (int neighbor : graph.get(node)) {
            if (!visited[neighbor]) {
                visited[neighbor] = true;
                queue.offer(neighbor); // add unvisited neighbors to rear
            }
        }
    }
}

Why Queue and not Stack for BFS? Queue ensures you process all neighbors of a node before going deeper. Stack would take you deep into one path first — that is DFS, not BFS.
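The BFS template can be exercised on a tiny hand-built graph. The class name and the sample adjacency list below are mine; the method returns the visit order instead of printing it, and uses ArrayDeque rather than LinkedList (both implement Queue):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class BfsDemo {
    // BFS over an adjacency-list graph, returning nodes in visit order.
    static List<Integer> bfs(int start, List<List<Integer>> graph) {
        Queue<Integer> queue = new ArrayDeque<>();
        boolean[] visited = new boolean[graph.size()];
        List<Integer> order = new ArrayList<>();
        queue.offer(start);
        visited[start] = true;
        while (!queue.isEmpty()) {
            int node = queue.poll(); // process front node
            order.add(node);
            for (int neighbor : graph.get(node)) {
                if (!visited[neighbor]) {
                    visited[neighbor] = true;
                    queue.offer(neighbor); // add unvisited neighbors to rear
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Undirected edges: 0-1, 0-2, 1-3, 2-4
        List<List<Integer>> graph = List.of(
            List.of(1, 2), List.of(0, 3), List.of(0, 4), List.of(1), List.of(2));
        System.out.println(bfs(0, graph)); // [0, 1, 2, 3, 4]
    }
}
```

Notice the output order: both distance-1 nodes (1 and 2) are visited before either distance-2 node (3 and 4) — exactly the level-by-level behavior the FIFO queue guarantees.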
The FIFO property is what guarantees level-by-level exploration.
BFS with Queue is used in:
Shortest path in unweighted graphs
Level order traversal of trees
Finding connected components
Word ladder problems
Rotten oranges, flood fill, and matrix BFS problems

Level Order Traversal — BFS on Trees
One of the most common Queue problems in interviews is Level Order Traversal of a binary tree.

public List<List<Integer>> levelOrder(TreeNode root) {
    List<List<Integer>> result = new ArrayList<>();
    if (root == null) return result;
    Queue<TreeNode> queue = new LinkedList<>();
    queue.offer(root);
    while (!queue.isEmpty()) {
        int levelSize = queue.size(); // number of nodes at current level
        List<Integer> level = new ArrayList<>();
        for (int i = 0; i < levelSize; i++) {
            TreeNode node = queue.poll();
            level.add(node.val);
            if (node.left != null) queue.offer(node.left);
            if (node.right != null) queue.offer(node.right);
        }
        result.add(level);
    }
    return result;
}

The key trick here is using queue.size() at the start of each while loop iteration to know exactly how many nodes belong to the current level. Process exactly that many nodes, then move to the next level.
This is LeetCode 102 — Binary Tree Level Order Traversal.

Sliding Window Maximum — Monotonic Deque
One of the most impressive Queue applications is the Sliding Window Maximum problem using a Monotonic Deque. This is the queue equivalent of the Monotonic Stack pattern you saw in stack problems. The idea — maintain a deque that stores indices of elements in decreasing order. The front always holds the index of the maximum element in the current window.

public int[] maxSlidingWindow(int[] nums, int k) {
    Deque<Integer> deque = new ArrayDeque<>(); // stores indices
    int[] result = new int[nums.length - k + 1];
    int idx = 0;
    for (int i = 0; i < nums.length; i++) {
        // remove indices that are out of the current window
        while (!deque.isEmpty() && deque.peekFirst() < i - k + 1) {
            deque.pollFirst();
        }
        // remove indices whose values are smaller than current:
        // they can never be the maximum for any future window
        while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) {
            deque.pollLast();
        }
        deque.offerLast(i);
        // window is fully formed, record maximum (front of deque)
        if (i >= k - 1) {
            result[idx++] = nums[deque.peekFirst()];
        }
    }
    return result;
}

This gives O(n) time for what would otherwise be an O(n×k) problem. This is LeetCode 239 — Sliding Window Maximum.

Java Queue Interface — Complete Method Reference
Here is every method you will ever need from Java's Queue and Deque interfaces:

Queue Methods:
offer(e) — add to rear, returns false if full (preferred over add)
poll() — remove from front, returns null if empty (preferred over remove)
peek() — view front without removing, returns null if empty (preferred over element)
isEmpty() — returns true if no elements
size() — returns number of elements
contains(o) — returns true if element exists

Deque Additional Methods:
offerFirst(e) — add to front
offerLast(e) — add to rear
pollFirst() — remove from front
pollLast() — remove from rear
peekFirst() — view front
peekLast() — view rear

PriorityQueue Specific:
offer(e) — add with natural ordering or custom comparator
poll() — remove element with highest priority
peek() — view highest priority element without removing

Common Interview Questions About Queue
These are the questions interviewers ask to test your understanding of queues conceptually — not just coding.
Q1. What is the difference between Queue and Stack?
Queue is FIFO — elements are removed in the order they were added. Stack is LIFO — the most recently added element is removed first. Queue removes from the front, Stack removes from the top.Q2. Why is ArrayDeque preferred over LinkedList for Queue in Java? ArrayDeque uses a resizable array internally and has better cache locality and no node allocation overhead. LinkedList creates a new node object for every element added, which means more garbage collection pressure. ArrayDeque is faster in practice for most Queue use cases.Q3. When would you use a PriorityQueue instead of a regular Queue? When the order of processing depends on priority rather than arrival order. For example in a hospital, critical patients are treated before minor cases regardless of when they arrived. Or in Dijkstra's algorithm, always processing the shortest known distance first.Q4. How is Queue used in BFS? BFS uses a Queue to explore nodes level by level. The starting node is enqueued first. Each time a node is dequeued, all its unvisited neighbors are enqueued. Since Queue is FIFO, all neighbors of a node are processed before going deeper — guaranteeing level-by-level exploration.Q5. What is the difference between poll() and remove() in Java Queue? Both remove the front element. remove() throws NoSuchElementException if the queue is empty. poll() returns null instead of throwing. Always use poll() for safer code.Q6. Can a Queue have duplicates? Yes. Queue does not have any restriction on duplicate values unlike Sets. The same value can appear multiple times in a Queue.Q7. What is a Blocking Queue and when is it used? A Blocking Queue is a thread-safe Queue used in multi-threaded applications. When a thread tries to dequeue from an empty queue, it blocks (waits) until an element is available. When a thread tries to enqueue into a full queue, it blocks until space is available. 
Used in Producer-Consumer patterns.Top LeetCode Problems on QueueHere are the most important LeetCode problems that use Queue — organized from beginner to advanced:Beginner Level:232. Implement Queue using Stacks — implement Queue with two stacks, classic interview question225. Implement Stack using Queues — reverse of 232, implement Stack using Queue933. Number of Recent Calls — sliding window with QueueIntermediate Level:102. Binary Tree Level Order Traversal — BFS on tree, must know107. Binary Tree Level Order Traversal II — same but bottom up994. Rotting Oranges — multi-source BFS on grid1091. Shortest Path in Binary Matrix — BFS shortest path542. 01 Matrix — multi-source BFS, distance to nearest 0127. Word Ladder — BFS on word graph, classicAdvanced Level:239. Sliding Window Maximum — monotonic deque, must know862. Shortest Subarray with Sum at Least K — monotonic deque with prefix sums407. Trapping Rain Water II — 3D BFS with priority queue787. Cheapest Flights Within K Stops — BFS with constraintsQueue Cheat Sheet — Everything at a GlanceCreate a Queue:Queue<Integer> q = new LinkedList<>(); // standardQueue<Integer> q = new ArrayDeque<>(); // faster, preferredDeque<Integer> dq = new ArrayDeque<>(); // double endedPriorityQueue<Integer> pq = new PriorityQueue<>(); // min heapPriorityQueue<Integer> pq = new PriorityQueue<>((a,b) -> b-a); // max heapCore Operations:q.offer(x); // enqueueq.poll(); // dequeueq.peek(); // front elementq.isEmpty(); // check emptyq.size(); // number of elementsDeque Operations:dq.offerFirst(x); // add to frontdq.offerLast(x); // add to reardq.pollFirst(); // remove from frontdq.pollLast(); // remove from reardq.peekFirst(); // view frontdq.peekLast(); // view rearBFS Template:Queue<Integer> queue = new LinkedList<>();queue.offer(start);visited[start] = true;while (!queue.isEmpty()) { int node = queue.poll(); for (int neighbor : graph.get(node)) { if (!visited[neighbor]) { visited[neighbor] = true; queue.offer(neighbor); } 
}}ConclusionQueue is one of those data structures that appears simple on the surface but has incredible depth once you start exploring its variations and applications. From the basic FIFO concept to Circular Queues, Deques, Priority Queues, Monotonic Deques, and BFS — each layer adds a new tool to your problem-solving arsenal.Here is the learning path to follow based on everything covered in this guide:Start with understanding FIFO vs LIFO and when each applies. Then get comfortable with Java's Queue interface — offer, poll, peek. Practice the BFS template until it feels automatic. Then move to Level Order Traversal problems. Once BFS clicks, tackle multi-source BFS problems like Rotting Oranges. Finally learn the Monotonic Deque pattern for sliding window problems.Master these and you will handle every Queue problem in any coding interview with confidence.
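To make the method reference above concrete, here is a short, self-contained program (class name is mine) exercising the core Queue operations and the poll()/remove() difference discussed in the interview questions:

```java
import java.util.ArrayDeque;
import java.util.NoSuchElementException;
import java.util.Queue;

public class QueueBasicsDemo {
    public static void main(String[] args) {
        Queue<Integer> q = new ArrayDeque<>(); // faster than LinkedList in practice

        q.offer(1);                    // enqueue at the rear
        q.offer(2);
        q.offer(3);

        System.out.println(q.peek());  // 1 — front element, not removed
        System.out.println(q.poll());  // 1 — FIFO: first in, first out
        System.out.println(q.size());  // 2

        q.poll();
        q.poll();
        System.out.println(q.poll());  // null — poll() on an empty queue is safe

        try {
            q.remove();                // remove() on an empty queue throws instead
        } catch (NoSuchElementException e) {
            System.out.println("remove() threw NoSuchElementException");
        }
    }
}
```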

QueueData StructureJavaBFSDequePriority QueueCircular Queue
Maximum Subarray

Maximum Subarray

LeetCode Problem 53. Try the problem here: https://leetcode.com/problems/maximum-subarray/

Given an integer array nums, find the subarray with the largest sum, and return its sum.

Example 1:
Input: nums = [-2,1,-3,4,-1,2,1,-5,4]
Output: 6
Explanation: The subarray [4,-1,2,1] has the largest sum 6.

Example 2:
Input: nums = [1]
Output: 1
Explanation: The subarray [1] has the largest sum 1.

Example 3:
Input: nums = [5,4,-1,7,8]
Output: 23
Explanation: The subarray [5,4,-1,7,8] has the largest sum 23.

Constraints:
1 <= nums.length <= 10^5
-10^4 <= nums[i] <= 10^4

Follow up: If you have figured out the O(n) solution, try coding another solution using the divide and conquer approach, which is more subtle.

My Approach: My first attempt was brute force: use nested loops to generate every subarray and keep the largest sum seen so far. Note that the answer must start at Integer.MIN_VALUE rather than 0, otherwise an all-negative array would wrongly return 0. Here is the implementation:

int ans = Integer.MIN_VALUE;
for (int i = 0; i < nums.length; i++) {
    int sum = 0;
    for (int j = i; j < nums.length; j++) {
        sum += nums[j];
        ans = Math.max(sum, ans);
    }
}
return ans;

Although this solution passed local tests, it ran into a 'Time Limit Exceeded' (TLE) error on LeetCode because of the problem's constraints. The nested-loop approach is logically sound, but its O(n²) running time is too slow for the larger test cases.

Optimized Algorithm (Kadane's Algorithm)
This algorithm is highly efficient, solving the problem in a single pass with a time complexity of O(n) (not O(1): the algorithm must still visit all n elements of the input array). The core principle is that any negative prefix sum acts as a 'debt' that subsequent positive numbers must overcome. The strategy is to always pursue the maximum sum.
If the running sum becomes negative at any point, it is reset to zero, effectively 'discarding' the previous negative sequence, and the search for a new maximum sum begins with the next element.

My steps to solve this question in O(n) time:

Initialization: Define two integer variables: currentSum (initialized to 0) and maxSum (initialized to Integer.MIN_VALUE). currentSum tracks the running total, while maxSum stores the overall maximum subarray sum found so far.
Iteration: Traverse the array using a single for loop.
Update Running Sum: At each element, add its value to currentSum.
Update Maximum: Compare currentSum with maxSum. If currentSum is greater, update maxSum with this new value.
Reset Negative Sums: If currentSum drops below zero, reset it to 0. This effectively "discards" a negative prefix that would otherwise reduce the sum of subsequent subarrays.
Return Result: After the loop completes, return maxSum as the final result.

Here is the Code:

int sum = 0;
int max = Integer.MIN_VALUE;
for (int i = 0; i < nums.length; i++) {
    sum += nums[i];
    max = Math.max(sum, max);
    if (sum < 0) {
        sum = 0;
    }
}
return max;

(Note: the same solution can also be written in another way.)

Here is another way to write it:

int sum = 0;
int max = Integer.MIN_VALUE;
for (int i = 0; i < nums.length; i++) {
    // take the larger of extending the previous sum or starting fresh at nums[i];
    // this guarantees sum never carries a negative prefix, so no explicit reset is needed
    sum = Math.max(sum + nums[i], nums[i]);
    max = Math.max(sum, max);
}
return max;
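To sanity-check Kadane's algorithm against the problem's examples, the snippet can be wrapped in a complete method; a minimal sketch (class and method names are mine, not from the original):

```java
public class MaximumSubarray {
    // Kadane's algorithm: single pass, O(n) time, O(1) extra space
    public static int maxSubArray(int[] nums) {
        int sum = 0;
        int max = Integer.MIN_VALUE;
        for (int num : nums) {
            sum += num;                 // extend the running subarray
            max = Math.max(max, sum);   // record the best sum seen so far
            if (sum < 0) sum = 0;       // discard a negative prefix
        }
        return max;
    }

    public static void main(String[] args) {
        System.out.println(maxSubArray(new int[]{-2, 1, -3, 4, -1, 2, 1, -5, 4})); // 6
        System.out.println(maxSubArray(new int[]{1}));                             // 1
        System.out.println(maxSubArray(new int[]{5, 4, -1, 7, 8}));                // 23
        System.out.println(maxSubArray(new int[]{-3, -1, -2}));                    // -1 (all negative)
    }
}
```

Initializing max to Integer.MIN_VALUE (not 0) is what makes the all-negative case come out right.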

LeetCodeArrayMedium
LeetCode 1980: Find Unique Binary String – Multiple Ways to Generate a Missing Binary Combination

LeetCode 1980: Find Unique Binary String – Multiple Ways to Generate a Missing Binary Combination

Try the Problem
You can solve the problem here: https://leetcode.com/problems/find-unique-binary-string/

Problem Description
You are given an array nums containing n unique binary strings, where each string has length n. Your task is to return any binary string of length n that does not appear in the array.

Important Conditions
Each string consists only of '0' and '1'.
Every string in the array is unique.
The output must be a binary string of length n.
If multiple valid answers exist, any one of them is acceptable.

Examples
Example 1
Input: nums = ["01","10"]
Output: "11"
Explanation: The possible binary strings of length 2 are "00", "01", "10", "11". Since "01" and "10" are already present, valid answers could be "00" or "11".

Example 2
Input: nums = ["00","01"]
Output: "11"
Another valid output could be "10".

Example 3
Input: nums = ["111","011","001"]
Output: "101"
Other valid answers include "000", "010", "100", and "110".

Constraints
n == nums.length
1 <= n <= 16
nums[i].length == n
nums[i] consists only of '0' and '1'
All strings in nums are unique

Important Observation
The total number of binary strings of length n is 2^n, but the array contains only n strings. Since 2^n grows very quickly and n ≤ 16, there are many possible binary strings missing from the array.
Our goal is simply to construct one of those missing strings.Thinking About the ProblemBefore jumping into coding, it's useful to think about different strategies that could help us generate a binary string that does not appear in the array.Possible Ways to Think About the ProblemWhen approaching this problem, several ideas may come to mind:Generate all possible binary strings of length n and check which one is missing.Store all strings in a HashSet or HashMap and construct a candidate string to verify whether it exists.Manipulate existing strings by flipping bits to create new combinations.Use a mathematical trick that guarantees the new string is different from every string in the list.Each of these approaches leads to a different solution strategy.In this article, we will walk through these approaches and understand how they work.Approach 1: Brute Force – Generate All Binary StringsIdeaThe simplest idea is to generate every possible binary string of length n and check whether it exists in the given array.Since there are:2^n possible binary stringsWe can generate them one by one and return the first string that does not appear in nums.StepsConvert numbers from 0 to (2^n - 1) into binary strings.Pad the binary string with leading zeros so its length becomes n.Check if that string exists in the array.If not, return it.Time ComplexityO(2^n * n)This works because n is at most 16, but it is still not the most elegant approach.Approach 2: HashMap + Bit Flipping (My Approach)IdeaWhile solving this problem, another idea is to store all given binary strings inside a HashMap for quick lookup.Then we can try to construct a new binary string by flipping bits from the existing strings.The intuition is simple:If the current character is '0', change it to '1'.If the current character is '1', change it to '0'.By flipping bits at different positions, we attempt to build a new binary combination.Once the string is constructed, we check whether it already exists in the map.If the 
generated string does not exist, we return it as our answer.

Java Implementation (My Solution)

class Solution {
    public String findDifferentBinaryString(String[] nums) {
        int len = nums[0].length();
        // HashMap to store all given binary strings
        HashMap<String, Integer> mp = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            mp.put(nums[i], i);
        }
        int cou = 0;
        String ans = "";
        for (int i = 0; i < nums.length; i++) {
            if (cou < len) {
                // Flip the current bit
                if (nums[i].charAt(cou) == '0') {
                    ans += '1';
                } else {
                    ans += '0';
                }
                cou++;
            } else {
                // If generated string does not exist in map
                if (!mp.containsKey(ans)) {
                    return ans;
                }
                // Reset and try building again
                ans = "";
                cou = 0;
            }
        }
        return ans;
    }
}

Time Complexity: O(n²), because we iterate through the array and perform string operations.
Space Complexity: O(n), for storing the strings in the HashMap.

Approach 3: Cantor’s Diagonalization (Optimal Solution)

Idea
A clever mathematical observation allows us to construct a string that must differ from every string in the array. We build a new string such that:
The first character differs from the first string.
The second character differs from the second string.
The third character differs from the third string.
And so on.
By ensuring that the generated string differs from each string at least at one position, it is guaranteed not to exist in the array. This technique is known as Cantor’s Diagonalization.

Java Implementation

class Solution {
    public String findDifferentBinaryString(String[] nums) {
        int n = nums.length;
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < n; i++) {
            // Flip the diagonal bit
            if (nums[i].charAt(i) == '0') {
                result.append('1');
            } else {
                result.append('0');
            }
        }
        return result.toString();
    }
}

Time Complexity: O(n), we only traverse the array once.
Space Complexity: O(n), for storing the resulting string.

Comparison of Approaches

Approach               | Time Complexity | Space Complexity | Notes
Brute Force            | O(2^n * n)      | O(n)             | Simple but inefficient
HashMap + Bit Flipping | O(n²)           | O(n)             | Constructive approach
Cantor Diagonalization | O(n)            | O(n)             | Optimal and elegant

Key Takeaways
This problem highlights an interesting concept in algorithm design: sometimes the best solution is not searching for the answer but constructing one directly. By understanding the structure of the input, we can generate a result that is guaranteed to be unique.

Conclusion
The Find Unique Binary String problem can be solved using multiple strategies, ranging from brute force enumeration to clever mathematical construction. While brute force works due to the small constraint (n ≤ 16), more elegant solutions exist. Using hashing or constructive approaches improves efficiency and demonstrates deeper algorithmic thinking. Among all approaches, the Cantor Diagonalization technique provides the most efficient and mathematically guaranteed solution. Understanding problems like this helps strengthen skills in string manipulation, hashing, and constructive algorithms, which are commonly tested in coding interviews.
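Approach 1 above is described only in steps, without code; here is a minimal sketch of what it could look like in Java (class name and structure are mine, not from the original):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FindUniqueBinaryStringBruteForce {
    // Enumerate all 2^n binary strings of length n, return the first one missing from nums
    public static String findDifferentBinaryString(String[] nums) {
        int n = nums.length;
        Set<String> seen = new HashSet<>(Arrays.asList(nums));
        for (int v = 0; v < (1 << n); v++) {
            // build the n-bit representation of v, most significant bit first (leading zeros kept)
            StringBuilder sb = new StringBuilder();
            for (int bit = n - 1; bit >= 0; bit--) {
                sb.append((v >> bit) & 1);
            }
            String candidate = sb.toString();
            if (!seen.contains(candidate)) {
                return candidate;
            }
        }
        return ""; // unreachable: 2^n > n, so some string is always missing
    }

    public static void main(String[] args) {
        System.out.println(findDifferentBinaryString(new String[]{"01", "10"}));          // 00
        System.out.println(findDifferentBinaryString(new String[]{"111", "011", "001"})); // 000
    }
}
```

It returns the lexicographically smallest missing string, which is one of the many acceptable answers.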

Binary StringsHashingCantor DiagonalizationLeetCodeMedium
Floor in a Sorted Array – Binary Search Explained with Story & Visuals | GeeksforGeeks

Floor in a Sorted Array – Binary Search Explained with Story & Visuals | GeeksforGeeks

Problem StatementPlatform: GeeksforGeeksYou are given a sorted array arr[] and an integer x. Your task is to find the index of the largest element in the array that is less than or equal to x.Return -1 if no such element exists.If multiple elements equal the floor, return the last occurrence.Example:Input: arr = [1, 2, 8, 10, 10, 12, 19], x = 11Output: 4✅ The largest element ≤ 11 is 10. The last occurrence is at index 4.👉 Try this problem here: GeeksforGeeks – Floor in a Sorted ArrayIntuition: What is “Floor” and Why It MattersImagine climbing stairs:You want to step as high as possible without going past a certain height.That step is your floor – the largest number ≤ x.In arrays:The floor of x is the largest number smaller than or equal to x.Because the array is sorted, we can search efficiently with binary search instead of checking every element.This is faster and helps you handle large arrays with millions of elements.Multiple Approaches1️⃣ Linear Search (Easy but Slow)Check each element from left to right. 
If it’s ≤ x, update the answer.

int ans = -1;
for (int i = 0; i < arr.length; i++) {
    if (arr[i] <= x) {
        ans = i; // store last occurrence
    }
}
return ans;

Time Complexity: O(n) – slow for large arrays
Space Complexity: O(1) – constant memory

2️⃣ Binary Search (Fast & Efficient)
Binary search cuts the search space in half at every step.

int ans = -1;
int low = 0, high = arr.length - 1;
while (low <= high) {
    int mid = low + (high - low) / 2;
    if (arr[mid] == x) {
        ans = mid;     // candidate floor
        low = mid + 1; // move right for last occurrence
    } else if (arr[mid] < x) {
        ans = mid;     // candidate floor
        low = mid + 1;
    } else {
        high = mid - 1; // too large, move left
    }
}
return ans;

Time Complexity: O(log n) – very fast
Space Complexity: O(1) – no extra space

Dry Run / Step-by-Step
Input: arr = [1, 2, 8, 10, 10, 12, 19], x = 11

Step | low | high | mid | arr[mid] | ans | Action
1    | 0   | 6    | 3   | 10       | 3   | arr[mid] < x → move right
2    | 4   | 6    | 5   | 12       | 3   | arr[mid] > x → move left
3    | 4   | 4    | 4   | 10       | 4   | arr[mid] < x → move right
4    | 5   | 4    | -   | -        | 4   | low > high → stop, return 4

✅ Finds floor = 10 at index 4.

Code Explanation in Simple Words
ans = -1 → stores best candidate for floor.
Use low and high as binary search boundaries.
mid = low + (high - low)/2 → safe midpoint.
If arr[mid] <= x, it can be the floor → move right to find last occurrence.
If arr[mid] > x, move left → floor is smaller.
Loop ends when low > high, return ans.

Edge Cases to Remember
x < arr[0] → return -1 (floor doesn’t exist)
x ≥ arr[n-1] → return last index (floor is last element)
Duplicates → always return last occurrence

Story-Based Visual Example: “Alice’s Book Shelf Adventure” 📚
Scenario:
Alice is a librarian. Books are arranged by height on a shelf. She has a new book and wants to place it next to the tallest book shorter than or equal to hers. Instead of checking each book, she uses a binary search approach to find the position quickly.

"Alice is scanning the bookshelf, which represents a sorted array: [1, 2, 8, 10, 10, 12, 19]. She is thinking where to place her new book labeled 11.
This step represents the initial step of the floor algorithm, understanding the array elements.""Alice places the book labeled 11 right after the last 10 on the shelf. This demonstrates finding the floor: the largest number ≤ 11 is 10, and the book is positioned next to it, illustrating the last occurrence logic.""From a top view, Alice is scanning all the books. This shows how binary search would conceptually divide the array: she quickly decides which section the book 11 belongs to without checking every book, demonstrating efficient search.""Alice has successfully placed the book 11 at the correct position. The floor of 11 is 10 (index 4). This visual confirms the algorithm’s result: the new element is positioned immediately after the last element ≤ x, exactly as binary search would determine."Why This Problem is ImportantStrengthens binary search skillsTeaches last occurrence / boundary conditions handlingMakes you think algorithmically, not just about numbersStory-based learning improves retention and understandingConclusionLinear search: easy but slow (O(n))Binary search: fast, elegant (O(log n))Multiple dry run steps make it easy to followStory-based images make abstract concepts concrete and memorable
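The binary-search snippet above is a fragment; here is a self-contained version you can run (class and method names are mine), with the equal and less-than branches folded into a single arr[mid] <= x check:

```java
public class FloorInSortedArray {
    // Binary search for the index of the largest element <= x (last occurrence); -1 if none
    public static int findFloor(int[] arr, int x) {
        int ans = -1, low = 0, high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] <= x) {
                ans = mid;      // candidate floor; keep searching right for the last occurrence
                low = mid + 1;
            } else {
                high = mid - 1; // arr[mid] too large, move left
            }
        }
        return ans;
    }

    public static void main(String[] args) {
        int[] arr = {1, 2, 8, 10, 10, 12, 19};
        System.out.println(findFloor(arr, 11)); // 4  (last occurrence of 10)
        System.out.println(findFloor(arr, 0));  // -1 (x smaller than every element)
        System.out.println(findFloor(arr, 19)); // 6  (floor is the last element)
    }
}
```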

GeeksforGeeksBinary SearchEasy
Understanding HashMap and Hashing Techniques: A Complete Guide

Understanding HashMap and Hashing Techniques: A Complete Guide

What is HashMap?
HashMap is a non-linear data structure that lives inside Java's collection framework, alongside other structures like Trees and Graphs. What makes it special? It uses a clever technique called hashing to store and retrieve data at blazing speeds.

Think of hashing as a smart address system. It converts large keys (like long strings or big numbers) into smaller values that work as array indices. This simple trick transforms your data lookup from slow O(n) or O(log n) operations into lightning-fast O(1) access times!

Understanding Hashing Mappings
Before diving deeper, let's understand the four types of hashing mappings:
One-to-one mapping — Each key maps to exactly one index
Many-to-one mapping — Multiple keys can map to the same index
One-to-many mapping — One key maps to multiple indices
Many-to-many mapping — Multiple keys map to multiple indices
HashMap primarily uses one-to-one and many-to-one mapping strategies to achieve its performance goals.

The Problem: Why Not Just Use Arrays?
Let's say you want to store 7 elements with keys: 1, 5, 23, 57, 234, 678, and 1000.
Using direct indexing (where key = array index), you'd need an array of size 1000 just to store 7 items! That leaves 993 empty slots wasting precious memory. Imagine having a parking lot with 1000 spaces for just 7 cars — totally inefficient!
This is the exact problem hash functions solve.

Hash Functions: The Solution
A hash function takes your key and calculates which array index to use. The beauty? You can now store those same 7 elements in an array of just size 10!
The most common hash function uses the modulus operator:
hash(key) = key mod (array size)
Example: With array size = 10:
Key 57 → Index: 57 mod 10 = 7
Key 234 → Index: 234 mod 10 = 4
Key 1000 → Index: 1000 mod 10 = 0
Now you only need an array of size 10 instead of 1000.
Problem solved!

Three Popular Hash Function Types

Modulus Method
The most widely used technique. Divide the key by the array size (or a prime number) and use the remainder as the index.
index = key mod size

Mid Square Method
Square the key value, then extract the middle digits to form your hash value.
Example: Key = 57 → 57² = 3249 → extract the middle digits "24"

Folding Hashing
Divide the key into equal parts, add them together, then apply modulus.
Example: Key = 123456
Split into: 12, 34, 56
Sum: 12 + 34 + 56 = 102
Apply modulus: 102 mod 10 = 2

The Collision Problem
Here's the catch: sometimes two different keys produce the same index. This is called a collision.
Example:
Key 27 → 27 mod 10 = 7
Key 57 → 57 mod 10 = 7
Both keys want index 7! We need smart strategies to handle this.

Collision Resolution: Two Main Techniques

Technique 1: Separate Chaining (Open Hashing)
Mapping Type: Many-to-one
When a collision happens, create a linked list at that index and store all colliding values in sorted order.
Example: Keys 22 and 32 both map to index 2:
Index 2: [22] → [32] → null
Advantage: Simple and works well in practice
Drawback: If too many collisions occur, the linked list becomes long and searching degrades to O(n). However, Java's collection framework uses highly optimized hash functions that achieve amortized O(1) time complexity.

Technique 2: Open Addressing (Closed Hashing)
Mapping Type: One-to-many
Instead of creating a list, find another empty slot in the array. There are three approaches:

Linear Probing
When a collision occurs, check the next index.
If it's occupied, keep checking the next one until you find an empty slot.
Hash Function: h(x) = x mod size
On Collision: h(x) = (h(x) + i) mod size, where i = 0, 1, 2, 3, …
Example: Keys 22 and 32 both map to index 2:
Store 22 at index 2
Try 32 at index 2 → occupied
Check index 3 → empty, store 32 there
Problems:
Clustering: Keys bunch together in adjacent indices, slowing down searches
Deleted Slot Problem: When you delete a key, it creates an empty slot that breaks the search chain

The Deleted Slot Problem Explained:
Keys 22 and 32 are at indices 2 and 3
Delete key 22 from index 2
Now when searching for key 32, the algorithm reaches index 2 (empty), thinks key 32 doesn't exist, and stops searching!
Solution: Mark deleted slots with a special marker (like -1) or perform rehashing after deletions.

Quadratic Probing
Same concept as linear probing, but instead of checking consecutive slots, jump in quadratic increments: 1, 4, 9, 16, 25...
Hash Function: h(x) = x mod size
On Collision: h(x) = (h(x) + i²) mod size, where i = 0, 1, 2, 3, …
Advantage: Significantly reduces clustering
Drawback: Still suffers from the deleted slot problem

Double Hashing
The most sophisticated approach!
Uses two different hash functions to create unique probe sequences for each key.
First Hash Function: h1(x) = x mod size
Second Hash Function: h2(x) = prime − (x mod prime)
On Collision: h(x) = (h1(x) + i × h2(x)) mod size, where i = 0, 1, 2, 3, …
Advantage: Effectively avoids clustering and provides the best performance among probing techniques

Load Factor: The Performance Indicator
The load factor tells you how full your hash table is:
Load Factor = Number of elements / Array size

Performance Guidelines
Load Factor ≤ 0.5 → Good performance, fast operations
Load Factor > 0.7 → Poor performance, time to rehash

Examples
Bad Load Factor:
Array size = 10, Elements = 8
Load Factor = 8 / 10 = 0.8
Action: Rehash the array (typically double the size)
Good Load Factor:
Array size = 200, Elements = 100
Load Factor = 100 / 200 = 0.5
Action: No changes needed, performing optimally

What is Rehashing?
When the load factor exceeds the threshold (usually 0.7), the hash table is resized — typically doubled in size. All existing elements are then rehashed and redistributed into the new larger array. This maintains optimal performance as your data grows.

Performance Summary
HashMap delivers exceptional average-case performance:
Insertion: O(1)
Deletion: O(1)
Search: O(1)
However, in the worst case (many collisions with poor hash functions), operations can degrade to O(n).
The key to maintaining O(1) performance:
Use a good hash function
Choose an appropriate collision resolution strategy
Monitor and maintain a healthy load factor
Rehash when necessary

Conclusion
HashMap is one of the most powerful and frequently used data structures in programming.
By understanding hashing techniques, collision resolution strategies, and load factors, you can write more efficient code and make better design decisions.Whether you're building a cache, implementing a frequency counter, or solving complex algorithm problems, HashMap is your go-to tool for fast data access!
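The separate-chaining idea described above can be sketched in a few lines of Java. This is a deliberately tiny toy (class name and API are mine; Java's real HashMap is far more sophisticated, with hashCode() mixing, treeified buckets, and automatic rehashing):

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class ChainedHashTable {
    // toy separate-chaining table; assumes non-negative integer keys
    private final List<LinkedList<Integer>> buckets;
    private final int size;

    public ChainedHashTable(int size) {
        this.size = size;
        this.buckets = new ArrayList<>();
        for (int i = 0; i < size; i++) buckets.add(new LinkedList<>());
    }

    private int hash(int key) { return key % size; } // modulus method

    public void insert(int key) { buckets.get(hash(key)).add(key); }

    public boolean contains(int key) { return buckets.get(hash(key)).contains(key); }

    public static void main(String[] args) {
        ChainedHashTable t = new ChainedHashTable(10);
        t.insert(27); // hash(27) = 27 % 10 = 7
        t.insert(57); // hash(57) = 57 % 10 = 7 — a collision, chained into the same bucket
        System.out.println(t.contains(27)); // true
        System.out.println(t.contains(57)); // true
        System.out.println(t.contains(7));  // false — same bucket, but this key was never inserted
    }
}
```

Lookups scan only one bucket's chain, which is why performance stays close to O(1) as long as chains stay short (i.e., the load factor stays healthy).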

hashmapdata-structureshashingjavaalgorithmscollision-resolutiontime-complexity
Find Peak Element – Binary Search Explained with Java Solution (LeetCode 162)

Find Peak Element – Binary Search Explained with Java Solution (LeetCode 162)

Try the QuestionBefore reading the explanation, try solving the problem yourself:👉 https://leetcode.com/problems/find-peak-element/Attempting the problem first helps develop algorithmic intuition, especially for binary search problems.Problem StatementYou are given a 0-indexed integer array nums.Your task is to find the index of a peak element.What is a Peak Element?A peak element is an element that is strictly greater than its immediate neighbors.For an element nums[i] to be a peak:nums[i] > nums[i-1]nums[i] > nums[i+1]However, the array boundaries have a special rule.Boundary RuleYou can imagine that:nums[-1] = -∞nums[n] = -∞This means:The first element only needs to be greater than the second element.The last element only needs to be greater than the second last element.Constraints1 <= nums.length <= 1000-2^31 <= nums[i] <= 2^31 - 1nums[i] != nums[i + 1] for all valid iImportant implications:Adjacent elements are never equal.The array always has at least one peak.Understanding the Problem with ExamplesExample 1Inputnums = [1,2,3,1]Visualization1 2 3 1^Here:3 > 23 > 1So 3 is a peak element.Output2(index of value 3)Example 2Inputnums = [1,2,1,3,5,6,4]Visualization1 2 1 3 5 6 4^ ^Possible peaks:26Valid outputs:1 or 5Either peak index is acceptable.Key ObservationsImportant facts about peak elements:Every array must contain at least one peak.If numbers keep increasing, the last element will be a peak.If numbers keep decreasing, the first element will be a peak.If the array has ups and downs, peaks exist in the middle.This guarantees a solution exists.Approach 1 — Linear Scan (Brute Force)The simplest approach is to check each element and see whether it satisfies the peak condition.AlgorithmTraverse the array.Check if the current element is greater than neighbors.Return the index if found.Examplenums = [1,2,3,1]Check:1 < 22 < 33 > 2 and 3 > 1 → PeakImplementationpublic int findPeakElement(int[] nums) {for(int i = 0; i < nums.length; i++){boolean left = (i == 0) || 
nums[i] > nums[i-1];
    boolean right = (i == nums.length - 1) || nums[i] > nums[i+1];
    if (left && right) {
        return i;
    }
}
return -1;

Time Complexity: O(n)
Space Complexity: O(1)
Although simple, this solution does not satisfy the required O(log n) complexity.

Approach 2 — Binary Search Using Peak Conditions
Instead of scanning the entire array, we can use binary search.

Key Idea
If we are at index mid, compare it with the neighbors. Three situations may occur.

Case 1 — Peak Found
nums[mid-1] < nums[mid] > nums[mid+1]
Then: mid is the peak

Case 2 — Increasing Slope
nums[mid] < nums[mid+1]
Example:
1 3 5 7
^
This means the peak must exist on the right side. Move search to the right.

Case 3 — Decreasing Slope
nums[mid] < nums[mid-1]
Example:
7 5 3 1
^
Peak exists on the left side. Move search to the left.

Implementation
Because this version only inspects interior indices, the boundaries must be handled up front (otherwise an array like [1,2] would fall through the loop without returning anything):

if (nums.length == 1 || nums[0] > nums[1]) return 0;
if (nums[nums.length - 1] > nums[nums.length - 2]) return nums.length - 1;
int i = 1;
int j = nums.length - 2;
while (i <= j) {
    int mid = i + (j - i) / 2;
    if (nums[mid-1] < nums[mid] && nums[mid] > nums[mid+1]) {
        return mid;
    } else if (nums[mid] < nums[mid+1]) {
        i = mid + 1;
    } else {
        j = mid - 1;
    }
}
return -1; // unreachable: a peak always exists

Time Complexity: O(log n)
Space Complexity: O(1)

Approach 3 — Optimized Binary Search (Most Elegant Solution)
A simpler and more elegant version of binary search exists.

Core Intuition
Compare nums[mid] and nums[mid + 1]. Two possibilities exist.

Case 1 — Increasing Slope
nums[mid] < nums[mid+1]
Example:
1 2 3 4
^
The peak must exist on the right side. Move: left = mid + 1

Case 2 — Decreasing Slope
nums[mid] > nums[mid+1]
Example:
4 3 2 1
^
Peak exists on the left side including mid. Move: right = mid
This gradually narrows down to the peak.

Final Optimized Implementation

class Solution {
    public int findPeakElement(int[] nums) {
        int i = 0;
        int j = nums.length - 1;
        while (i < j) {
            int mid = i + (j - i) / 2;
            if (nums[mid] < nums[mid + 1]) {
                i = mid + 1;
            } else {
                j = mid;
            }
        }
        return i;
    }
}

Step-by-Step Example
Array: [1,2,1,3,5,6,4]
Iteration 1: mid = 3, nums[mid] = 3, nums[mid+1] = 5. Increasing slope → move right.
Iteration 2: mid = 5, nums[mid] = 6, nums[mid+1] = 4. Decreasing slope → move left.
Eventually: peak index = 5, value 6.

Why This Binary Search Works
The array behaves
like a mountain landscape.Example visualization:/\/ \/ \__If you stand at mid:If the right side is higher, go right.If the left side is higher, go left.This guarantees that you will eventually reach a peak.Time ComplexityO(log n)Each iteration cuts the search space in half.Space ComplexityO(1)Only a few variables are used.Key Takeaways✔ A peak element is greater than its neighbors✔ The array always contains at least one peak✔ Binary search can be applied using slope comparison✔ The optimized solution only compares nums[mid] and nums[mid+1]✔ The final algorithm runs in O(log n) timeFinal ThoughtsThis problem is an excellent example of applying binary search beyond sorted arrays.Instead of searching for a value, binary search is used here to locate a structural property of the array (a peak).Understanding this pattern is extremely valuable because similar logic appears in many interview problems involving:Mountain arraysBitonic arraysOptimization search problemsMastering this approach will significantly improve your binary search problem-solving skills for technical interviews.
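To verify the optimized implementation end to end, here is a small harness (class name is mine) that also checks the returned index against the peak definition:

```java
public class FindPeakDemo {
    // the optimized binary search from the article, wrapped for testing
    public static int findPeakElement(int[] nums) {
        int i = 0, j = nums.length - 1;
        while (i < j) {
            int mid = i + (j - i) / 2;
            if (nums[mid] < nums[mid + 1]) i = mid + 1; // peak is to the right
            else j = mid;                               // peak is at mid or to the left
        }
        return i;
    }

    // check an index against the peak definition (boundaries count as -infinity)
    public static boolean isPeak(int[] nums, int i) {
        boolean left = (i == 0) || nums[i] > nums[i - 1];
        boolean right = (i == nums.length - 1) || nums[i] > nums[i + 1];
        return left && right;
    }

    public static void main(String[] args) {
        System.out.println(findPeakElement(new int[]{1, 2, 3, 1})); // 2

        int[] nums = {1, 2, 1, 3, 5, 6, 4};
        int idx = findPeakElement(nums);          // a valid peak index (1 or 5)
        System.out.println(isPeak(nums, idx));    // true
    }
}
```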

LeetCodeBinary SearchMediumJava
Sort a Stack Using Recursion - Java Solution Explained

Sort a Stack Using Recursion - Java Solution Explained

IntroductionSort a Stack is one of those problems that feels impossible at first — you only have access to the top of a stack, you cannot index into it, and the only tools you have are push, pop, and peek. How do you sort something with such limited access?The answer is recursion. And this problem is a perfect example of how recursion can do something elegant that feels like it should require much more complex logic.You can find this problem here — Sort a Stack — GeeksForGeeksThis article explains the complete intuition, both recursive functions in detail, a thorough dry run, all approaches, and complexity analysis.What Is the Problem Really Asking?You have a stack of integers. Sort it so that the smallest element is at the bottom and the largest element is at the top.Input: [41, 3, 32, 2, 11] (11 is at top)Output: [2, 3, 11, 32, 41] (41 is at top, 2 at bottom)Since the smallest is at the bottom and the largest at the top, popping gives you the largest first — the stack yields its elements in descending order as you pop.The constraints make this interesting — you can only use stack operations (push, pop, peek, isEmpty). No arrays, no sorting algorithms directly on the data, no random access.Real Life Analogy — Sorting a Stack of BooksImagine a stack of books of different thicknesses on your desk. You want the thinnest book on top and thickest at the bottom. But you can only take from the top.Here is what you do — pick up books one by one from the top and set them aside. Once you have removed enough, place each book back in its correct position.
If the book you are placing is thicker than what is currently on top, slide the top books aside temporarily, place the thick book down, then put the others back.That is exactly what the recursive sort does — take elements out one by one, and insert each back into the correct position.The Core Idea — Two Recursive Functions Working TogetherThe solution uses two recursive functions that work hand in hand:sortStack(st) — removes elements one by one until the stack is empty, then inserts each back using insertinsert(st, temp) — inserts a single element into its correct sorted position in an already-sorted stackThink of it like this — sortStack is the manager that empties the stack recursively, and insert is the worker that places each element in the right position on the way back.Function 1: sortStack — The Main Recursive Driverpublic void sortStack(Stack<Integer> st) { // base case — empty or single element is already sorted if (st.empty() || st.size() == 1) return; // Step 1: remove the top element int temp = st.pop(); // Step 2: recursively sort the remaining stack sortStack(st); // Step 3: insert the removed element in correct position insert(st, temp);}How to Think About ThisThe leap of faith here — trust that sortStack correctly sorts whatever is below. After the recursive call, the remaining stack is perfectly sorted. Now your only job is to insert temp (the element you removed) into its correct position in that sorted stack.This is the classic "solve smaller, fix current" recursion pattern. 
Reduce the problem by one element, trust recursion for the rest, then fix the current element's position.Function 2: insert — Placing an Element in Sorted Positionvoid insert(Stack<Integer> st, int temp) { // base case 1: stack is empty — just push // base case 2: top element is smaller or equal — push on top if (st.empty() || st.peek() <= temp) { st.push(temp); return; } // top element is greater than temp // temp needs to go below the top element int val = st.pop(); // temporarily remove the top insert(st, temp); // try inserting temp deeper st.push(val); // restore the removed element on top}How to Think About ThisYou want to insert temp in sorted order. The stack is already sorted (largest on top). So:If the top is smaller than or equal to temp → temp belongs on top. Push it.If the top is greater than temp → temp needs to go below the top. So pop the top temporarily, try inserting temp deeper, then push the top back.This is the same "pop, recurse deeper, push back" pattern you saw in the Baseball Game problem — when you need to access elements below the top without permanent removal.Complete Solutionclass Solution { // insert temp into its correct position in sorted stack void insert(Stack<Integer> st, int temp) { if (st.empty() || st.peek() <= temp) { st.push(temp); return; } int val = st.pop(); insert(st, temp); st.push(val); } // sort the stack using recursion public void sortStack(Stack<Integer> st) { if (st.empty() || st.size() == 1) return; int temp = st.pop(); // remove top sortStack(st); // sort remaining insert(st, temp); // insert top in correct position }}Detailed Dry Run — st = [3, 2, 1] (1 at top)Let us trace every single call carefully.sortStack calls (going in):sortStack([3, 2, 1]) temp = 1, remaining = [3, 2] call sortStack([3, 2]) temp = 2, remaining = [3] call sortStack([3]) size == 1, return ← base case now insert(st=[3], temp=2) peek=3 > 2, pop val=3 insert(st=[], temp=2) st empty, push 2 ← base case push val=3 back stack is now [2, 3] (3 on 
top) sortStack([3,2]) done → stack = [2, 3] now insert(st=[2,3], temp=1) peek=3 > 1, pop val=3 insert(st=[2], temp=1) peek=2 > 1, pop val=2 insert(st=[], temp=1) st empty, push 1 ← base case push val=2 back stack = [1, 2] push val=3 back stack = [1, 2, 3] (3 on top)sortStack done → stack = [1, 2, 3]✅ Final stack (3 on top, 1 at bottom): largest at top, smallest at bottom — correctly sorted!Detailed Dry Run — st = [41, 3, 32, 2, 11] (11 at top)Let us trace at a higher level to see the pattern:sortStack calls unwinding (popping phase):sortStack([41, 3, 32, 2, 11]) pop 11, sortStack([41, 3, 32, 2]) pop 2, sortStack([41, 3, 32]) pop 32, sortStack([41, 3]) pop 3, sortStack([41]) base case — return insert([41], 3) → [3, 41] (41 on top) insert([3, 41], 32) → [3, 32, 41] (41 on top) insert([3, 32, 41], 2) → [2, 3, 32, 41] (41 on top) insert([2, 3, 32, 41], 11) → [2, 3, 11, 32, 41] (41 on top)✅ Final: [2, 3, 11, 32, 41] with 41 at top, 2 at bottom — correct!Let us verify the insert([3, 32, 41], 2) step in detail since it involves multiple pops:insert(st=[3, 32, 41], temp=2) peek=41 > 2, pop val=41 insert(st=[3, 32], temp=2) peek=32 > 2, pop val=32 insert(st=[3], temp=2) peek=3 > 2, pop val=3 insert(st=[], temp=2) st empty, push 2 push val=3 back → [2, 3] push val=32 back → [2, 3, 32] push val=41 back → [2, 3, 32, 41]Beautiful chain of pops followed by a chain of pushes — recursion handling what would be very messy iterative logic.Approach 2: Using an Additional Stack (Iterative)If recursion is not allowed or you want an iterative solution, you can use a second stack.public void sortStack(Stack<Integer> st) { Stack<Integer> tempStack = new Stack<>(); while (!st.empty()) { int curr = st.pop(); // move elements from tempStack back to st // until we find the right position for curr while (!tempStack.empty() && tempStack.peek() < curr) { st.push(tempStack.pop()); } tempStack.push(curr); } // move everything back to original stack while (!tempStack.empty()) {
st.push(tempStack.pop()); }}How This WorkstempStack is maintained in sorted order (smallest on top). For each element popped from st, we move elements from tempStack back to st until we find the right position, then push. At the end, transferring everything back leaves st with the smallest at the bottom and the largest at the top.Time Complexity: O(n²) — same as recursive approach Space Complexity: O(n) — explicit extra stackApproach 3: Using Collections.sort (Cheat — Not Recommended)public void sortStack(Stack<Integer> st) { List<Integer> list = new ArrayList<>(st); Collections.sort(list); // ascending st.clear(); for (int num : list) { st.push(num); // smallest pushed first = smallest at bottom }}This gives the right answer but completely defeats the purpose of the problem. Never use this in an interview — it shows you do not understand the problem's intent.Time Complexity: O(n log n) Space Complexity: O(n)Approach Comparison
Approach | Time | Space | Allowed in Interview | Code Complexity
Recursive (Two Functions) | O(n²) | O(n) | ✅ Best answer | Medium
Iterative (Two Stacks) | O(n²) | O(n) | ✅ Good alternative | Medium
Collections.sort | O(n log n) | O(n) | ❌ Defeats purpose | Easy
Time and Space Complexity AnalysisRecursive ApproachTime Complexity: O(n²)For each of the n elements popped by sortStack, the insert function may traverse the entire stack to find the correct position — O(n) per insert. Total: O(n) × O(n) = O(n²).This is similar to insertion sort — and in fact, this recursive approach IS essentially insertion sort implemented on a stack using recursion.Space Complexity: O(n)No explicit extra data structure is used, but the call stack holds up to n frames for sortStack and up to n additional frames for insert. In the worst case, total recursion depth is O(n) + O(n) = O(n) space.The Connection to Insertion SortIf you have studied sorting algorithms, this solution will look very familiar.
It is Insertion Sort implemented recursively on a stack:sortStack is like the outer loop — take one element at a timeinsert is like the inner loop — find the correct position by comparing and shiftingThe key difference from array-based insertion sort is that "shifting" in an array is done by moving elements right. Here, "shifting" is done by temporarily popping elements off the stack, inserting at the right depth, then pushing back. Recursion handles the "shift" naturally.Common Mistakes to AvoidWrong base case in insert The base case is st.empty() || st.peek() <= temp. Both conditions are needed. Without st.empty(), you call peek() on an empty stack and get an exception. Without st.peek() <= temp, you never stop even when you find the right position.Using < instead of <= in insert Using strictly less than means equal elements get treated as "wrong position" and cause unnecessary recursion. Using <= correctly handles duplicates — equal elements stay in their current relative order.Calling sortStack instead of insert inside insert insert should only call itself recursively. Calling sortStack inside insert re-sorts an already-sorted stack — completely wrong and causes incorrect results.Not pushing val back after the recursive insert call This is the most common bug. After insert(st, temp) places temp in the correct position, you must push val back on top. Forgetting this loses elements from the stack permanently.FAQs — People Also AskQ1. How do you sort a stack without using extra space in Java? Using recursion — the call stack itself serves as temporary storage. sortStack removes elements one by one and insert places each back in its correct sorted position. No explicit extra data structure is needed, but O(n) implicit call stack space is used.Q2. What is the time complexity of sorting a stack using recursion? O(n²) in all cases — for each of the n elements, the insert function traverses up to n elements to find the correct position. 
This is equivalent to insertion sort applied to a stack.Q3. Can you sort a stack without recursion? Yes — using a second auxiliary stack. Pop elements from the original stack one by one and insert each into the correct position in the second stack by temporarily moving elements back. Transfer everything back to the original stack at the end.Q4. Why does the insert function need to push val back after the recursive call? Because the goal of insert is to place temp in the correct position WITHOUT losing any existing elements. When we pop val temporarily to go deeper, we must restore it after temp has been placed. Not restoring it means permanently losing that element from the stack.Q5. Is sorting a stack asked in coding interviews? Yes, it appears frequently at companies like Amazon, Adobe, and in platforms like GeeksForGeeks and InterviewBit. It tests whether you understand recursion deeply enough to use it as a mechanism for reordering — not just for computation.Similar Problems to Practice NextDelete Middle Element of a Stack — use same recursive pop-recurse-push patternReverse a Stack — very similar recursive approach, great warmup84. Largest Rectangle in Histogram — advanced stack problem946. Validate Stack Sequences — stack simulation150. Evaluate Reverse Polish Notation — stack with operationsConclusionSort a Stack Using Recursion is a beautiful problem that demonstrates the true power of recursion — using the call stack itself as temporary storage to perform operations that would otherwise require extra data structures.The key insight is splitting the problem into two clean responsibilities. sortStack handles one job — peel elements off one by one until the base case, then trigger insert on the way back up. insert handles one job — place a single element into its correct position in an already-sorted stack by temporarily moving larger elements aside.Once you see those two responsibilities clearly, the code almost writes itself. 
And once this pattern clicks, it directly prepares you for more advanced recursive problems like reversing a stack, deleting the middle element, and even understanding how recursive backtracking works at a deeper level.
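As a closing usage example, here is a minimal harness around the two recursive functions from the article (the method bodies mirror the solution above; the class name SortStackDemo and the buildAndSort helper are mine, added just to make the demo runnable):

```java
import java.util.Stack;

public class SortStackDemo {
    // insert temp into its correct position in an already-sorted stack
    static void insert(Stack<Integer> st, int temp) {
        if (st.empty() || st.peek() <= temp) { st.push(temp); return; }
        int val = st.pop();
        insert(st, temp);
        st.push(val);
    }

    // sort the stack using recursion
    static void sortStack(Stack<Integer> st) {
        if (st.empty() || st.size() == 1) return;
        int temp = st.pop();
        sortStack(st);
        insert(st, temp);
    }

    // helper: push values in order (last value ends up on top), then sort
    static Stack<Integer> buildAndSort(int[] values) {
        Stack<Integer> st = new Stack<>();
        for (int x : values) st.push(x);
        sortStack(st);
        return st;
    }

    public static void main(String[] args) {
        Stack<Integer> st = buildAndSort(new int[]{41, 3, 32, 2, 11});
        System.out.println(st); // prints bottom → top: [2, 3, 11, 32, 41]
    }
}
```

Running it reproduces the dry run from the article: 41 ends up on top and 2 at the bottom.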

Stack · Recursion · Java · GeeksForGeeks · Medium
Climbing Stairs Problem (LeetCode 70) – Complete Guide with Optimized Solutions


IntroductionThe Climbing Stairs problem is one of the most commonly asked coding interview questions for beginners. It is a perfect example to understand recursion, memoization, and dynamic programming (DP).In this article, we will break down the problem step by step and explore multiple approaches—from brute force recursion to an optimized space-efficient solution.Link of Problem: LeetCode – Climbing StairsProblem StatementYou are climbing a staircase that has n steps.Each time, you can either climb 1 step or 2 steps.The goal is to calculate the total number of distinct ways to reach the top.ExampleInput:n = 2Output:2Explanation:1 + 12Input:n = 3Output:3Explanation:1 + 1 + 11 + 22 + 1Key InsightTo reach step n, there are only two possibilities:From step n-1 (taking 1 step)From step n-2 (taking 2 steps)So, the recurrence relation becomes:ways(n) = ways(n-1) + ways(n-2)This is identical to the Fibonacci sequence, making this problem a classic DP question.Approach 1: Recursive Solution (Brute Force)IdeaBreak the problem into smaller subproblems:Count ways to reach n-1Count ways to reach n-2Add both resultsCodeclass Solution { public int climbStairs(int n) { if(n == 0) return 1; if(n < 0) return 0; return climbStairs(n-1) + climbStairs(n-2); }}ComplexityTime Complexity: O(2^n)Space Complexity: O(n)DrawbackThis solution recalculates the same subproblems multiple times, leading to Time Limit Exceeded (TLE) for larger values.Approach 2: Recursion with Memoization (Top-Down DP)IdeaTo optimize recursion, store already computed results using a HashMap.Avoid repeated calculationsConvert exponential time into linear timeCodeimport java.util.HashMap;class Solution { private HashMap<Integer, Integer> memo = new HashMap<>(); public int climbStairs(int n) { if(n == 0) return 1; if(n < 0) return 0; if(memo.containsKey(n)) { return memo.get(n); } int result = climbStairs(n-1) + climbStairs(n-2); memo.put(n, result); return result; }}ComplexityTime Complexity: O(n)Space Complexity: 
O(n)Why It WorksMemoization ensures each subproblem is solved only once, making recursion efficient and practical.Approach 3: Dynamic Programming (Bottom-Up)IdeaInstead of recursion, build the solution iteratively:Use an array dp[]Store results for each stepBuild from smaller values to larger onesCodeclass Solution { public int climbStairs(int n) { if(n <= 2) return n; int[] dp = new int[n+1]; dp[1] = 1; dp[2] = 2; for(int i = 3; i <= n; i++) { dp[i] = dp[i-1] + dp[i-2]; } return dp[n]; }}ComplexityTime Complexity: O(n)Space Complexity: O(n)Approach 4: Optimal Solution (Space Optimized)IdeaWe only need the last two values instead of the whole array.Codeclass Solution { public int climbStairs(int n) { if(n <= 2) return n; int prev1 = 1; int prev2 = 2; for(int i = 3; i <= n; i++) { int current = prev1 + prev2; prev1 = prev2; prev2 = current; } return prev2; }}ComplexityTime Complexity: O(n)Space Complexity: O(1)Key TakeawaysThe problem follows a Fibonacci-like patternBrute force recursion is simple but inefficientMemoization converts recursion into an efficient solutionDynamic programming avoids recursion completelySpace optimization reduces memory usage to constant spaceWhen This Problem Is AskedThis question is frequently asked in:Coding interviews (product-based companies)Data Structures & Algorithms examsOnline coding platformsIt evaluates:Problem-solving abilityUnderstanding of recursionOptimization skillsConclusionThe Climbing Stairs problem is a foundational example for learning dynamic programming. Starting with recursion and improving it using memoization and iterative DP demonstrates how to optimize algorithms effectively.Understanding this pattern will help solve many similar problems related to sequences and decision-making.Frequently Asked Questions (FAQs)1. Is this problem related to Fibonacci?Yes, the recurrence relation is exactly the same as the Fibonacci sequence.2.
Why does recursion fail for large inputs?Because it recalculates the same values repeatedly, leading to exponential time complexity.3. What is the best approach?The space-optimized approach is the most efficient with O(n) time and O(1) space.
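To see the Fibonacci-like pattern concretely, here is the space-optimized version from the article wrapped in a small runnable demo (the class name ClimbStairsDemo is mine):

```java
public class ClimbStairsDemo {
    // Space-optimized DP: only the last two counts are kept.
    public static int climbStairs(int n) {
        if (n <= 2) return n;
        int prev1 = 1, prev2 = 2;
        for (int i = 3; i <= n; i++) {
            int current = prev1 + prev2;  // ways(i) = ways(i-1) + ways(i-2)
            prev1 = prev2;
            prev2 = current;
        }
        return prev2;
    }

    public static void main(String[] args) {
        for (int n = 1; n <= 6; n++) {
            System.out.println("n=" + n + " -> " + climbStairs(n));
        }
        // prints 1, 2, 3, 5, 8, 13: each value is the sum of the previous two
    }
}
```

The printed sequence makes the recurrence visible at a glance, which is exactly why this problem is the canonical warm-up for dynamic programming.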

Easy · Java · LeetCode · Recursion · Memoization
Stack Data Structure in Java: The Complete In-Depth Guide


1. What Is a Stack?A Stack is a linear data structure that stores elements in a sequential order, but with one strict rule — you can only insert or remove elements from one end, called the top.It is one of the simplest yet most powerful data structures in computer science. Its strength comes from its constraint. Because everything happens at one end, the behavior of a stack is completely predictable.The formal definition: A Stack is a linear data structure that follows the Last In, First Out (LIFO) principle — the element inserted last is the first one to be removed.Here is what a stack looks like visually: ┌──────────┐ │ 50 │ ← TOP (last inserted, first removed) ├──────────┤ │ 40 │ ├──────────┤ │ 30 │ ├──────────┤ │ 20 │ ├──────────┤ │ 10 │ ← BOTTOM (first inserted, last removed) └──────────┘When you push 60 onto this stack, it goes on top. When you pop, 60 comes out first. That is LIFO.2. Real-World AnalogiesBefore writing a single line of code, it helps to see stacks in the real world. These analogies will make the concept permanently stick.A Pile of Plates In a cafeteria, clean plates are stacked on top of each other. You always pick the top plate. You always place a new plate on top. You never reach into the middle. This is a stack.Browser Back Button Every time you visit a new webpage, it gets pushed onto a history stack. When you press the Back button, the browser pops the most recent page off the stack and takes you there. The page you visited first is at the bottom — you only reach it after going back through everything else.Undo Feature in Text Editors When you type in a document and press Ctrl+Z, the most recent action is undone first. That is because every action you perform is pushed onto a stack. Undo simply pops from that stack.Call Stack in Programming When a function calls another function, the current function's state is pushed onto the call stack. When the inner function finishes, it is popped off and execution returns to the outer function. 
This is the literal stack your programs run on.A Stack of Books Put five books on a table, one on top of another. You can only take the top book without knocking the pile over. That is a stack.3. The LIFO Principle ExplainedLIFO stands for Last In, First Out.It means whatever you put in last is the first thing to come out. This is the exact opposite of a Queue (which is FIFO — First In, First Out).Let us trace through an example step by step:
Start: Stack is empty → []
Push 10 → [10] (10 is at the top)
Push 20 → [10, 20] (20 is at the top)
Push 30 → [10, 20, 30] (30 is at the top)
Pop → returns 30 (30 was last in, first out) Stack: [10, 20]
Pop → returns 20 Stack: [10]
Peek → returns 10 (just looks, does not remove) Stack: [10]
Pop → returns 10 Stack: [] (stack is now empty)
Every single operation happens only at the top. The bottom of the stack is never directly accessible.4. Stack Operations & Time ComplexityA stack supports the following core operations:
Operation | Description | Time Complexity
push(x) | Insert element x onto the top of the stack | O(1)
pop() | Remove and return the top element | O(1)
peek() / top() | Return the top element without removing it | O(1)
isEmpty() | Check if the stack has no elements | O(1)
isFull() | Check if the stack has reached its capacity (Array only) | O(1)
size() | Return the number of elements in the stack | O(1)
search(x) | Find position of element from top (Java built-in only) | O(n)
All primary stack operations — push, pop, peek, isEmpty — run in O(1) constant time. This is what makes the stack so efficient. It does not matter whether the stack has 10 elements or 10 million — these operations are always instant.Space complexity for a stack holding n elements is O(n).5. Implementation 1 — Using a Static ArrayThis is the most fundamental way to implement a stack.
We use a fixed-size array and a variable called top to track where the top of the stack currently is.How it works:top starts at -1 (stack is empty)On push: increment top, then place the element at arr[top]On pop: return arr[top], then decrement topOn peek: return arr[top] without changing it// StackUsingArray.javapublic class StackUsingArray { private int[] arr; private int top; private int capacity; // Constructor — initialize with a fixed capacity public StackUsingArray(int capacity) { this.capacity = capacity; arr = new int[capacity]; top = -1; } // Push: add element to the top public void push(int value) { if (isFull()) { System.out.println("Stack Overflow! Cannot push " + value); return; } arr[++top] = value; System.out.println("Pushed: " + value); } // Pop: remove and return top element public int pop() { if (isEmpty()) { System.out.println("Stack Underflow! Stack is empty."); return -1; } return arr[top--]; } // Peek: view the top element without removing public int peek() { if (isEmpty()) { System.out.println("Stack is empty."); return -1; } return arr[top]; } // Check if stack is empty public boolean isEmpty() { return top == -1; } // Check if stack is full public boolean isFull() { return top == capacity - 1; } // Return current size public int size() { return top + 1; } // Display all elements public void display() { if (isEmpty()) { System.out.println("Stack is empty."); return; } System.out.print("Stack (top → bottom): "); for (int i = top; i >= 0; i--) { System.out.print(arr[i] + " "); } System.out.println(); } // Main method to test public static void main(String[] args) { StackUsingArray stack = new StackUsingArray(5); stack.push(10); stack.push(20); stack.push(30); stack.push(40); stack.push(50); stack.push(60); // This will trigger Stack Overflow stack.display(); System.out.println("Peek: " + stack.peek()); System.out.println("Pop: " + stack.pop()); System.out.println("Pop: " + stack.pop()); stack.display(); System.out.println("Size: " + 
stack.size()); }}```**Output:**```Pushed: 10Pushed: 20Pushed: 30Pushed: 40Pushed: 50Stack Overflow! Cannot push 60Stack (top → bottom): 50 40 30 20 10Peek: 50Pop: 50Pop: 40Stack (top → bottom): 30 20 10Size: 3Key Points about Array Implementation:Fixed size — you must declare capacity upfrontVery fast — direct array index accessStack Overflow is possible if capacity is exceededMemory is pre-allocated even if stack is not full6. Implementation 2 — Using an ArrayListAn ArrayList-based stack removes the fixed-size limitation. The ArrayList grows dynamically, so you never have to worry about stack overflow due to capacity.How it works:The end of the ArrayList acts as the topadd() is used for pushremove(size - 1) is used for popget(size - 1) is used for peek// StackUsingArrayList.javaimport java.util.ArrayList;public class StackUsingArrayList { private ArrayList<Integer> list; // Constructor public StackUsingArrayList() { list = new ArrayList<>(); } // Push: add to the end (which is our top) public void push(int value) { list.add(value); System.out.println("Pushed: " + value); } // Pop: remove and return the last element public int pop() { if (isEmpty()) { System.out.println("Stack Underflow! 
Stack is empty."); return -1; } int top = list.get(list.size() - 1); list.remove(list.size() - 1); return top; } // Peek: view the last element public int peek() { if (isEmpty()) { System.out.println("Stack is empty."); return -1; } return list.get(list.size() - 1); } // Check if stack is empty public boolean isEmpty() { return list.isEmpty(); } // Return size public int size() { return list.size(); } // Display elements from top to bottom public void display() { if (isEmpty()) { System.out.println("Stack is empty."); return; } System.out.print("Stack (top → bottom): "); for (int i = list.size() - 1; i >= 0; i--) { System.out.print(list.get(i) + " "); } System.out.println(); } // Main method to test public static void main(String[] args) { StackUsingArrayList stack = new StackUsingArrayList(); stack.push(5); stack.push(15); stack.push(25); stack.push(35); stack.display(); System.out.println("Peek: " + stack.peek()); System.out.println("Pop: " + stack.pop()); System.out.println("Pop: " + stack.pop()); stack.display(); System.out.println("Is Empty: " + stack.isEmpty()); System.out.println("Size: " + stack.size()); }}```**Output:**```Pushed: 5Pushed: 15Pushed: 25Pushed: 35Stack (top → bottom): 35 25 15 5Peek: 35Pop: 35Pop: 25Stack (top → bottom): 15 5Is Empty: falseSize: 2Key Points about ArrayList Implementation:Dynamic size — grows automatically as neededNo overflow riskSlight overhead compared to raw array due to ArrayList internalsExcellent for most practical use cases7. Implementation 3 — Using a LinkedListA LinkedList-based stack is the most memory-efficient approach when you do not know the stack size in advance. Each element (node) holds data and a pointer to the next node. 
The head of the LinkedList acts as the top of the stack.How it works:Each node stores a value and a reference to the node below itPush creates a new node and makes it the new headPop removes the head node and returns its valuePeek returns the head node's value without removing it// StackUsingLinkedList.javapublic class StackUsingLinkedList { // Inner Node class private static class Node { int data; Node next; Node(int data) { this.data = data; this.next = null; } } private Node top; // Head of the linked list = top of stack private int size; // Constructor public StackUsingLinkedList() { top = null; size = 0; } // Push: create new node and link it to top public void push(int value) { Node newNode = new Node(value); newNode.next = top; // new node points to current top top = newNode; // new node becomes the new top size++; System.out.println("Pushed: " + value); } // Pop: remove and return top node's data public int pop() { if (isEmpty()) { System.out.println("Stack Underflow! Stack is empty."); return -1; } int value = top.data; top = top.next; // move top pointer to next node size--; return value; } // Peek: return top node's data without removing public int peek() { if (isEmpty()) { System.out.println("Stack is empty."); return -1; } return top.data; } // Check if empty public boolean isEmpty() { return top == null; } // Return size public int size() { return size; } // Display elements from top to bottom public void display() { if (isEmpty()) { System.out.println("Stack is empty."); return; } System.out.print("Stack (top → bottom): "); Node current = top; while (current != null) { System.out.print(current.data + " "); current = current.next; } System.out.println(); } // Main method to test public static void main(String[] args) { StackUsingLinkedList stack = new StackUsingLinkedList(); stack.push(100); stack.push(200); stack.push(300); stack.push(400); stack.display(); System.out.println("Peek: " + stack.peek()); System.out.println("Pop: " + stack.pop()); 
System.out.println("Pop: " + stack.pop()); stack.display(); System.out.println("Size: " + stack.size()); }}```**Output:**```Pushed: 100Pushed: 200Pushed: 300Pushed: 400Stack (top → bottom): 400 300 200 100Peek: 400Pop: 400Pop: 300Stack (top → bottom): 200 100Size: 2Key Points about LinkedList Implementation:Truly dynamic — each node allocated only when neededNo wasted memory from pre-allocationSlightly more memory per element (each node carries a pointer)Ideal for stacks where size is completely unknown8. Java's Built-in Stack ClassJava provides a ready-made Stack class inside java.util. It extends Vector and is thread-safe by default.// JavaBuiltinStack.javaimport java.util.Stack;public class JavaBuiltinStack { public static void main(String[] args) { Stack<Integer> stack = new Stack<>(); // Push elements stack.push(10); stack.push(20); stack.push(30); stack.push(40); System.out.println("Stack: " + stack); // Peek — look at top without removing System.out.println("Peek: " + stack.peek()); // Pop — remove top System.out.println("Pop: " + stack.pop()); System.out.println("After pop: " + stack); // Search — returns 1-based position from top System.out.println("Search 20: position " + stack.search(20)); // isEmpty System.out.println("Is Empty: " + stack.isEmpty()); // Size System.out.println("Size: " + stack.size()); }}```**Output:**```Stack: [10, 20, 30, 40]Peek: 40Pop: 40After pop: [10, 20, 30]Search 20: position 2Is Empty: falseSize: 3Important Note: In modern Java development, it is often recommended to use Deque (specifically ArrayDeque) instead of Stack for better performance, since Stack is synchronized and carries the overhead of Vector.// Using ArrayDeque as a stack (modern preferred approach)import java.util.ArrayDeque;import java.util.Deque;public class ModernStack { public static void main(String[] args) { Deque<Integer> stack = new ArrayDeque<>(); stack.push(10); // pushes to front stack.push(20); stack.push(30); System.out.println("Top: " + 
stack.peek()); System.out.println("Pop: " + stack.pop()); System.out.println("Stack: " + stack); }}9. Comparison of All Implementations
Feature | Array | ArrayList | LinkedList | Java Stack | ArrayDeque
Size | Fixed | Dynamic | Dynamic | Dynamic | Dynamic
Stack Overflow Risk | Yes | No | No | No | No
Memory Usage | Pre-allocated | Auto-grows | Per-node overhead | Auto-grows | Auto-grows
Push Time | O(1) | O(1) amortized | O(1) | O(1) | O(1)
Pop Time | O(1) | O(1) | O(1) | O(1) | O(1)
Peek Time | O(1) | O(1) | O(1) | O(1) | O(1)
Thread Safe | No | No | No | Yes | No
Best For | Known size, max speed | General use | Unknown/huge size | Legacy code | Modern Java
10. Advantages & DisadvantagesAdvantages
Advantage | Explanation
Simple to implement | Very few rules and operations to worry about
O(1) operations | Push, pop, and peek are all constant time
Memory efficient | No extra pointers needed (array-based)
Supports recursion | The call stack is itself a stack
Easy undo/redo | Natural fit for reversible action tracking
Backtracking | Perfectly suited for maze, puzzle, and game solving
Expression evaluation | Powers compilers and calculators
Disadvantages
Disadvantage | Explanation
Limited access | Cannot access elements in the middle directly
Fixed size (array) | Array-based stacks overflow if size is exceeded
No random access | You cannot do stack[2] — only top is accessible
Memory waste (array) | Pre-allocated array wastes space if underused
Not suitable for all problems | Many problems need queues, trees, or graphs instead
Stack overflow in recursion | Very deep recursion can overflow the JVM call stack
11. Real-World Use Cases of StackUnderstanding when to use a stack is just as important as knowing how to implement one. Here is where stacks show up in real software:Function Call Management (Call Stack) Every time your Java program calls a method, the JVM pushes that method's frame onto the call stack. When the method returns, the frame is popped. This is why you see "StackOverflowError" when you write infinite recursion.Undo and Redo Operations Text editors, image editors (Photoshop), and IDEs use two stacks — one for undo history and one for redo history.
Every action pushes onto the undo stack. Ctrl+Z pops from it and pushes to the redo stack.Browser Navigation Your browser maintains a back-stack and a forward-stack. Visiting a new page pushes to the back-stack. Pressing Back pops from it and pushes to the forward-stack.Expression Evaluation and Conversion Compilers use stacks to evaluate arithmetic expressions and convert between infix, prefix, and postfix notations. For example: 3 + 4 * 2 must be evaluated considering operator precedence — this is done with a stack.Balanced Parentheses Checking Linters, compilers, and IDEs use stacks to check if brackets are balanced: {[()]} is valid, {[(])} is not.Backtracking Algorithms Maze solving, N-Queens, Sudoku solvers, and depth-first search all use stacks (explicitly or via recursion) to backtrack to previous states when a path fails.Syntax Parsing Compilers parse source code using stacks to match opening and closing constructs like if/else, try/catch, { and }.12. Practice Problems with Full SolutionsHere is where things get really interesting. These problems will sharpen your stack intuition and prepare you for coding interviews.Problem 1 — Reverse a String Using a StackDifficulty: EasyProblem: Write a Java program to reverse a string using a Stack.Approach: Push every character of the string onto a stack, then pop them all. 
Since LIFO reverses the order, the characters come out reversed.// ReverseString.javaimport java.util.Stack;public class ReverseString { public static String reverse(String str) { Stack<Character> stack = new Stack<>(); // Push all characters for (char c : str.toCharArray()) { stack.push(c); } // Pop all characters to build reversed string StringBuilder reversed = new StringBuilder(); while (!stack.isEmpty()) { reversed.append(stack.pop()); } return reversed.toString(); } public static void main(String[] args) { System.out.println(reverse("hello")); // olleh System.out.println(reverse("java")); // avaj System.out.println(reverse("racecar")); // racecar (palindrome) System.out.println(reverse("datastructure")); // erutcurtasatad }}Problem 2 — Check Balanced ParenthesesDifficulty: Easy–MediumProblem: Given a string containing (, ), {, }, [, ], determine if the brackets are balanced.Approach: Push every opening bracket onto the stack. When you see a closing bracket, check if it matches the top of the stack. If it does, pop. 
If it does not, the string is unbalanced.// BalancedParentheses.javaimport java.util.Stack;public class BalancedParentheses { public static boolean isBalanced(String expr) { Stack<Character> stack = new Stack<>(); for (char c : expr.toCharArray()) { // Push all opening brackets if (c == '(' || c == '{' || c == '[') { stack.push(c); } // For closing brackets, check the top of stack else if (c == ')' || c == '}' || c == ']') { if (stack.isEmpty()) return false; char top = stack.pop(); if (c == ')' && top != '(') return false; if (c == '}' && top != '{') return false; if (c == ']' && top != '[') return false; } } // Stack must be empty at the end for a balanced expression return stack.isEmpty(); } public static void main(String[] args) { System.out.println(isBalanced("{[()]}")); // true System.out.println(isBalanced("{[(])}")); // false System.out.println(isBalanced("((()))")); // true System.out.println(isBalanced("{]")); // false System.out.println(isBalanced("")); // true (empty is balanced) }}Problem 3 — Reverse a Stack (Without Extra Data Structure)Difficulty: Medium–HardProblem: Reverse all elements of a stack using only recursion — no array or extra stack allowed.Approach: This is a classic recursion problem. 
You need two recursive functions:insertAtBottom(stack, item) — inserts an element at the very bottom of the stackreverseStack(stack) — pops all elements, reverses, and uses insertAtBottom to rebuild// ReverseStack.javaimport java.util.Stack;public class ReverseStack { // Insert an element at the bottom of the stack public static void insertAtBottom(Stack<Integer> stack, int item) { if (stack.isEmpty()) { stack.push(item); return; } int top = stack.pop(); insertAtBottom(stack, item); stack.push(top); } // Reverse the stack using insertAtBottom public static void reverseStack(Stack<Integer> stack) { if (stack.isEmpty()) return; int top = stack.pop(); reverseStack(stack); // reverse the remaining stack insertAtBottom(stack, top); // insert popped element at bottom } public static void main(String[] args) { Stack<Integer> stack = new Stack<>(); stack.push(1); stack.push(2); stack.push(3); stack.push(4); stack.push(5); System.out.println("Before: " + stack); // [1, 2, 3, 4, 5] reverseStack(stack); System.out.println("After: " + stack); // [5, 4, 3, 2, 1] }}Problem 4 — Evaluate a Postfix ExpressionDifficulty: MediumProblem: Evaluate a postfix (Reverse Polish Notation) expression. Example: "2 3 4 * +" should return 14 because it is 2 + (3 * 4).Approach: Scan left to right. If you see a number, push it. 
If you see an operator, pop two numbers, apply the operator, and push the result.// PostfixEvaluation.javaimport java.util.Stack;public class PostfixEvaluation { public static int evaluate(String expression) { Stack<Integer> stack = new Stack<>(); String[] tokens = expression.split(" "); for (String token : tokens) { // If it's a number, push it if (token.matches("-?\\d+")) { stack.push(Integer.parseInt(token)); } // If it's an operator, pop two and apply else { int b = stack.pop(); // second operand int a = stack.pop(); // first operand switch (token) { case "+": stack.push(a + b); break; case "-": stack.push(a - b); break; case "*": stack.push(a * b); break; case "/": stack.push(a / b); break; } } } return stack.pop(); } public static void main(String[] args) { System.out.println(evaluate("2 3 4 * +")); // 14 → 2 + (3*4) System.out.println(evaluate("5 1 2 + 4 * + 3 -")); // 14 → 5+((1+2)*4)-3 System.out.println(evaluate("3 4 +")); // 7 }}Problem 5 — Next Greater ElementDifficulty: MediumProblem: For each element in an array, find the next greater element to its right. If none exists, output -1.Example: Input: [4, 5, 2, 10, 8] → Output: [5, 10, 10, -1, -1]Approach: Iterate right to left. Maintain a stack of candidates. For each element, pop all stack elements that are smaller than or equal to it — they can never be the answer for any element to the left. The top of the stack (if not empty) is the next greater element.// NextGreaterElement.javaimport java.util.Stack;import java.util.Arrays;public class NextGreaterElement { public static int[] nextGreater(int[] arr) { int n = arr.length; int[] result = new int[n]; Stack<Integer> stack = new Stack<>(); // stores elements, not indices // Traverse from right to left for (int i = n - 1; i >= 0; i--) { // Pop elements smaller than or equal to current while (!stack.isEmpty() && stack.peek() <= arr[i]) { stack.pop(); } // Next greater element result[i] = stack.isEmpty() ? 
-1 : stack.peek(); // Push current element for future comparisons stack.push(arr[i]); } return result; } public static void main(String[] args) { int[] arr1 = {4, 5, 2, 10, 8}; System.out.println(Arrays.toString(nextGreater(arr1))); // [5, 10, 10, -1, -1] int[] arr2 = {1, 3, 2, 4}; System.out.println(Arrays.toString(nextGreater(arr2))); // [3, 4, 4, -1] int[] arr3 = {5, 4, 3, 2, 1}; System.out.println(Arrays.toString(nextGreater(arr3))); // [-1, -1, -1, -1, -1] }}Problem 6 — Sort a Stack Using RecursionDifficulty: HardProblem: Sort a stack in ascending order (smallest on top) using only recursion — no loops, no extra data structure.// SortStack.javaimport java.util.Stack;public class SortStack { // Insert element in correct sorted position public static void sortedInsert(Stack<Integer> stack, int item) { if (stack.isEmpty() || item > stack.peek()) { stack.push(item); return; } int top = stack.pop(); sortedInsert(stack, item); stack.push(top); } // Sort the stack public static void sortStack(Stack<Integer> stack) { if (stack.isEmpty()) return; int top = stack.pop(); sortStack(stack); // sort remaining sortedInsert(stack, top); // insert top in sorted position } public static void main(String[] args) { Stack<Integer> stack = new Stack<>(); stack.push(34); stack.push(3); stack.push(31); stack.push(98); stack.push(92); stack.push(23); System.out.println("Before sort: " + stack); sortStack(stack); System.out.println("After sort: " + stack); // smallest on top }}13. Summary & Key TakeawaysA stack is a simple, elegant, and powerful data structure. 
Here is everything in one place:What it is: A linear data structure that follows LIFO — Last In, First Out.Core operations: push (add to top), pop (remove from top), peek (view top), isEmpty — all in O(1) time.Three ways to implement it in Java:Array-based: fast, fixed size, risk of overflowArrayList-based: dynamic, easy, slightly more overheadLinkedList-based: truly dynamic, memory-efficient per-element, best for unknown sizesWhen to use it:Undo/redo systemsBrowser navigationBalancing brackets and parenthesesEvaluating mathematical expressionsBacktracking problemsManaging recursive function callsDepth-first searchWhen NOT to use it:When you need random access to elementsWhen insertion/deletion is needed from both ends (use Deque)When you need to search efficiently (use HashMap or BST)Modern Java recommendation: Prefer ArrayDeque over the legacy Stack class for non-thread-safe scenarios. Use Stack only when you need synchronized access.The stack is one of those data structures that once you truly understand, you start seeing it everywhere — in your browser, in your IDE, in recursive algorithms, and deep within the operating system itself.This article covered everything from the fundamentals of the Stack data structure to multiple Java implementations, time complexity analysis, real-world applications, and six practice problems of increasing difficulty. Bookmark it as a reference and revisit the practice problems regularly — they are the real test of your understanding.
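To make the "Modern Java recommendation" above concrete, here is a minimal sketch, assuming nothing beyond the standard library: an ArrayDeque used through the Deque interface behaves exactly like a stack (the class name is illustrative).

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ArrayDequeStackDemo {
    public static void main(String[] args) {
        // Program to the Deque interface; ArrayDeque is the recommended
        // non-synchronized replacement for the legacy java.util.Stack
        Deque<Integer> stack = new ArrayDeque<>();
        stack.push(10);
        stack.push(20);
        stack.push(30);
        System.out.println(stack.peek());    // 30 (last pushed is on top)
        System.out.println(stack.pop());     // 30
        System.out.println(stack.pop());     // 20
        System.out.println(stack.isEmpty()); // false (10 is still inside)
    }
}
```

Unlike Stack, ArrayDeque does no per-operation locking, which is why it is faster in single-threaded code.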

DataStructuresJavaStackDataStructureLIFO
LeetCode 3783 Mirror Distance of an Integer | Java Solution Explained

LeetCode 3783 Mirror Distance of an Integer | Java Solution Explained

IntroductionSome problems test complex algorithms, while others focus on fundamental concepts done right.LeetCode 3783 – Mirror Distance of an Integer falls into the second category.This problem is simple yet important because it builds understanding of:Digit manipulationReversing numbersMathematical operationsIn this article, we’ll break down the problem in a clean and intuitive way, along with an optimized Java solution.🔗 Problem LinkLeetCode: Mirror Distance of an IntegerProblem StatementYou are given an integer n.The mirror distance is defined as:| n - reverse(n) |Where:reverse(n) = number formed by reversing digits of n|x| = absolute value👉 Return the mirror distance.ExamplesExample 1Input:n = 25Output:27Explanation:reverse(25) = 52|25 - 52| = 27Example 2Input:n = 10Output:9Explanation:reverse(10) = 1|10 - 1| = 9Example 3Input:n = 7Output:0Key InsightThe problem consists of two simple steps:1. Reverse the number2. Take absolute differenceIntuitionLet’s take an example:n = 120Step 1: Reverse digits120 → 021 → 21👉 Leading zeros are ignored automatically.Step 2: Compute difference|120 - 21| = 99ApproachStep-by-StepExtract digits using % 10Build reversed numberUse Math.abs() for final resultJava Codeclass Solution { // Function to reverse a number public int reverse(int k) { int rev = 0; while (k != 0) { int dig = k % 10; // get last digit k = k / 10; // remove last digit rev = rev * 10 + dig; // build reversed number } return rev; } public int mirrorDistance(int n) { // Calculate mirror distance return Math.abs(n - reverse(n)); }}Dry RunInput:n = 25Execution:Reverse → 52Difference → |25 - 52| = 27Complexity AnalysisTime ComplexityReversing number → O(d)(d = number of digits)👉 Overall: O(log n)Space Complexity👉 O(1) (no extra space used)Why This WorksDigit extraction ensures correct reversalLeading zeros automatically removedAbsolute difference ensures positive resultEdge Cases to ConsiderSingle digit → result = 0Numbers ending with zero (e.g., 10 → 1)Large 
numbers (up to 10⁹)Key TakeawaysSimple math problems can test core logicDigit manipulation is a must-know skillAlways handle leading zeros carefullyUse built-in functions like Math.abs() effectivelyReal-World RelevanceConcepts used here are helpful in:Number transformationsPalindrome problemsReverse integer problemsMathematical algorithmsConclusionThe Mirror Distance of an Integer problem is a great example of combining basic operations to form a meaningful solution.While simple, it reinforces important programming fundamentals that are widely used in more complex problems.Frequently Asked Questions (FAQs)1. What happens to leading zeros in reverse?They are automatically removed when stored as an integer.2. Can this be solved using strings?Yes, but integer-based approach is more efficient.3. What is the best approach?Using arithmetic operations (% and /) is optimal.
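The FAQ above mentions that a string-based solution is possible. Here is a hedged sketch of that alternative (class and method names are illustrative, not from the original solution); it accumulates into a long because reversing a number near 10⁹ can exceed the int range.

```java
public class MirrorDistanceString {
    // String-based alternative: reverse the decimal representation,
    // then parse it back. Integer parsing drops leading zeros for us.
    static long mirrorDistance(int n) {
        long reversed = Long.parseLong(
                new StringBuilder(Integer.toString(n)).reverse().toString());
        return Math.abs(n - reversed);
    }

    public static void main(String[] args) {
        System.out.println(mirrorDistance(25));  // 27
        System.out.println(mirrorDistance(120)); // 99 ("021" parses to 21)
        System.out.println(mirrorDistance(7));   // 0
    }
}
```

As the article notes, the arithmetic approach avoids the string allocations, but this version makes the leading-zero behavior easy to see.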

EasyArrayReverse NumberLeetCodeJava
LeetCode 39: Combination Sum – Java Backtracking Solution with Dry Run & Complexity

LeetCode 39: Combination Sum – Java Backtracking Solution with Dry Run & Complexity

IntroductionIf you are preparing for coding interviews or improving your Data Structures and Algorithms skills, LeetCode 39 Combination Sum is one of the most important backtracking problems to learn. This problem helps you understand how recursion explores multiple possibilities and how combinations are generated efficiently. It is a foundational problem that builds strong problem-solving skills and prepares you for many advanced recursion and backtracking questions.Why Should You Solve This Problem?Combination Sum is not just another coding question — it teaches you how to think recursively and break a complex problem into smaller decisions. By solving it, you learn how to manage recursive paths, avoid duplicate combinations, and build interview-level backtracking intuition. Once you understand this pattern, problems like subsets, permutations, N-Queens, and Sudoku Solver become much easier to approach.LeetCode Problem LinkProblem Name: Combination SumProblem Link: Combination SumProblem StatementGiven an array of distinct integers called candidates and a target integer target, you need to return all unique combinations where the chosen numbers sum to the target.Important rules:You can use the same number unlimited times.Only unique combinations should be returned.Order of combinations does not matter.ExampleExample 1Input:candidates = [2,3,6,7]target = 7Output:[[2,2,3],[7]]Explanation2 + 2 + 3 = 77 itself equals targetUnderstanding the Problem in Simple WordsWe are given some numbers.We need to:Pick numbers from the arrayAdd them togetherReach the target sumUse numbers multiple times if neededAvoid duplicate combinationsThis problem belongs to the Backtracking + Recursion category.Real-Life AnalogyImagine you have coins of different values.You want to make an exact payment.You can reuse coins multiple times.You need to find every possible valid coin combination.This is exactly what Combination Sum does.Intuition Behind the SolutionAt every index, we have two 
choices:
1. Pick the current number
2. Skip the current number

Since numbers can be reused unlimited times, when we pick a number, we stay at the same index. This creates a recursion tree.

We continue until:
Target becomes 0 → valid answer
Target becomes negative → invalid path
Array ends → stop recursion

Why Backtracking Works Here

Backtracking helps us explore all possible combinations, undo previous decisions, and try another path. It is useful whenever we need all combinations, all subsets, path exploration, or recursive searching.

Approach 1: Backtracking Using Pick and Skip

Core idea: at every element, either take it or move to the next element.

Java Code (Pick and Skip Method)

import java.util.ArrayList;
import java.util.List;

class Solution {
    List<List<Integer>> result = new ArrayList<>();

    public void solve(int[] candidates, int index, int target, List<Integer> current) {
        if (target == 0) {
            result.add(new ArrayList<>(current));
            return;
        }
        if (index == candidates.length) {
            return;
        }
        if (candidates[index] <= target) {
            current.add(candidates[index]);
            solve(candidates, index, target - candidates[index], current); // pick: stay at same index
            current.remove(current.size() - 1);
        }
        solve(candidates, index + 1, target, current); // skip
    }

    public List<List<Integer>> combinationSum(int[] candidates, int target) {
        solve(candidates, 0, target, new ArrayList<>());
        return result;
    }
}

Approach 2: Backtracking Using Loop (Optimized)

This is the cleaner and more optimized version. Your code belongs to this category.

Java Code (Loop-Based Backtracking)

import java.util.ArrayList;
import java.util.List;

class Solution {
    List<List<Integer>> result = new ArrayList<>();

    public void solve(int[] arr, int index, int target, List<Integer> current) {
        if (target == 0) {
            result.add(new ArrayList<>(current));
            return;
        }
        if (index == arr.length) {
            return;
        }
        for (int i = index; i < arr.length; i++) {
            if (arr[i] > target) {
                continue;
            }
            current.add(arr[i]);
            solve(arr, i, target - arr[i], current); // same index i: element can be reused
            current.remove(current.size() - 1);
        }
    }

    public List<List<Integer>> combinationSum(int[] candidates, int target) {
        solve(candidates, 0, target, new ArrayList<>());
        return result;
    }
}

Dry Run of the Algorithm

Input: candidates = 
[2,3,6,7], target = 7

Step-by-Step Execution
Start: solve([2,3,6,7], index=0, target=7, [])
Pick 2 → [2], target = 5
Pick 2 again → [2,2], target = 3
Pick 2 again → [2,2,2], target = 1. No valid choice possible; backtrack.
Try 3 → [2,2,3], target = 0. Valid answer found. Add [2,2,3].
Try 7 → [7], target = 0. Valid answer found. Add [7].

Final Output: [[2,2,3],[7]]

Recursion Tree Visualization
The root [] branches on 2, 3, 6, and 7; the leftmost path repeatedly picks 2 (2 → 2,2 → 2,2,2) before backtracking and trying 3. Every branch explores a different combination.

Time Complexity Analysis
Time complexity: O(2^Target), or more precisely O(N^(Target/minValue)), where N is the number of candidates and Target is the required sum. Every number can be picked multiple times, which creates many recursive branches.
Space complexity: O(Target). The recursion stack stores elements, and the maximum recursion depth depends on the target.

Why We Pass the Same Index Again
Notice this line: solve(arr, i, target - arr[i], current);
We pass i, not i+1. Why? Because we can reuse the same number unlimited times. If we used i+1, we would move forward and lose repetition.

Why Duplicate Combinations Are Not Created
We start the loop from the current index. This guarantees that [2,3] and [3,2] are not both generated; order remains controlled.

Common Mistakes Beginners Make
1. Using i+1 instead of i
Wrong: solve(arr, i+1, target-arr[i], current)
This prevents reuse.
2. Forgetting the backtracking step
The line current.remove(current.size()-1) must run after each recursive call; without removing, recursion keeps incorrect values.
3. 
Missing the target == 0 base case
This is where the valid answer is stored.

Important Interview Insight

Combination Sum is a foundational problem. It helps build understanding for Combination Sum II, Subsets, Permutations, N-Queens, Word Search, and Sudoku Solver. This question is frequently asked in coding interviews.

Pattern Recognition

Use backtracking when the problem says: find all combinations, generate all subsets, find all paths, use recursion, explore possibilities.

Optimized Thinking Strategy

Whenever you see a target sum, repeated selection, and multiple combinations, think: Backtracking + DFS.

Edge Cases
Case 1: candidates = [2], target = 1 → Output: []. No possible answer.
Case 2: candidates = [1], target = 3 → Output: [[1,1,1]]

Interview Answer in One Line
"We use backtracking to recursively try all candidate numbers while reducing the target and backtrack whenever a path becomes invalid."

Final Java Code

import java.util.ArrayList;
import java.util.List;

class Solution {
    List<List<Integer>> result = new ArrayList<>();

    public void solve(int[] arr, int index, int target, List<Integer> current) {
        if (target == 0) {
            result.add(new ArrayList<>(current));
            return;
        }
        for (int i = index; i < arr.length; i++) {
            if (arr[i] > target) {
                continue;
            }
            current.add(arr[i]);
            solve(arr, i, target - arr[i], current);
            current.remove(current.size() - 1);
        }
    }

    public List<List<Integer>> combinationSum(int[] candidates, int target) {
        solve(candidates, 0, target, new ArrayList<>());
        return result;
    }
}

Key Takeaways
Combination Sum uses backtracking.
Reuse the same element by passing the same index.
The target becomes smaller with each recursive call.
Backtracking removes the last element after each path.
Very important for interview preparation.

Frequently Asked Questions
Is Combination Sum DP or Backtracking? It is primarily solved using backtracking. Dynamic programming can also solve it, but recursion is more common.
Why is this Medium difficulty? Because it requires recursion understanding, backtracking logic, and duplicate prevention.
Can we sort the array? Yes. Sorting can help with pruning.

Conclusion

LeetCode 39 Combination Sum is one of the best problems to learn recursion 
and backtracking.Once you understand this pattern, many interview problems become easier.The loop-based recursive solution is clean, optimized, and interview-friendly.If you master this question, you gain strong understanding of recursive decision trees and combination generation.
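The FAQ above notes that sorting can help with pruning. Here is a hedged sketch of that variant (the class name is illustrative): once the array is sorted, the moment arr[i] > target we can break out of the loop instead of continuing, because every later candidate is at least as large.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CombinationSumSorted {
    static List<List<Integer>> result = new ArrayList<>();

    static void solve(int[] arr, int index, int target, List<Integer> current) {
        if (target == 0) {
            result.add(new ArrayList<>(current));
            return;
        }
        for (int i = index; i < arr.length; i++) {
            if (arr[i] > target) break; // sorted: all later candidates are too big too
            current.add(arr[i]);
            solve(arr, i, target - arr[i], current); // same index: reuse allowed
            current.remove(current.size() - 1);      // backtrack
        }
    }

    public static void main(String[] args) {
        int[] candidates = {7, 3, 2, 6};
        Arrays.sort(candidates); // sorting enables the break-based pruning
        solve(candidates, 0, 7, new ArrayList<>());
        System.out.println(result); // [[2, 2, 3], [7]]
    }
}
```

The answers are identical; pruning only cuts branches that could never reach the target.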

LeetcodeMediumRecursionBacktrackingJava
LeetCode 36: Valid Sudoku Explained – Java Solutions, Intuition & Formula Dry Run

LeetCode 36: Valid Sudoku Explained – Java Solutions, Intuition & Formula Dry Run

Introduction

Sudoku is a universally beloved puzzle, but validating a Sudoku board algorithmically is a classic technical interview question. In this post, we are going to dive deep into LeetCode 36: Valid Sudoku. We won't just look at the code; we will explore the intuition behind the problem so you don't have to memorize anything. We'll cover an ingenious in-place validation approach, break down the complex math formula used to check 3×3 sub-boxes, and look at an alternative optimal solution using HashSets. Let's dive in!

Understanding the Problem

The problem asks us to determine if a partially filled 9×9 Sudoku board is valid. To be valid, the filled cells must follow three straightforward rules:
1. Each row must contain the digits 1-9 without repetition.
2. Each column must contain the digits 1-9 without repetition.
3. Each of the nine 3×3 sub-boxes must contain the digits 1-9 without repetition.

Important Note: A valid board doesn't mean the board is fully solvable! We only care about checking the numbers that are currently on the board.

Intuition: How to Think About the Problem

Before writing code, how do we, as humans, check if a Sudoku board is valid? If you place a 5 in a cell, you quickly scan horizontally (its row), vertically (its column), and within its small 3×3 square. If you see another 5, the board is invalid.

To translate this to code, we have two choices:
1. The Simulation Approach: Go cell by cell. Pick up the number, hide it, and check its row, column, and 3×3 box to see if that number exists anywhere else. (This is the approach we will look at first.)
2. The Memory Approach: Go cell by cell, but keep a "notebook" (like a Hash Table) of everything we have seen so far. 
If we see a number we've already written down for a specific row, column, or box, it's invalid.

Approach 1: The In-Place Validation (Space-Optimized)

Here is a brilliant solution that validates the board without using any extra data structures.

The Logic: Iterate through every cell on the board. When we find a number, we temporarily replace it with a '.' (empty space). Then, we iterate 9 times to check its entire row, column, and sub-box. If the number is found, we return false. Otherwise, we put the number back and move to the next cell.

The Java Code

class Solution {
    public boolean isvalid(char[][] board, int i, int j, char k) {
        for (int m = 0; m < 9; m++) {
            // Check row
            if (board[i][m] == k) return false;
            // Check column
            if (board[m][j] == k) return false;
            // Check 3x3 sub-box
            if (board[3 * (i / 3) + m / 3][3 * (j / 3) + m % 3] == k) return false;
        }
        return true;
    }

    public boolean isValidSudoku(char[][] board) {
        for (int i = 0; i < board.length; i++) {
            for (int j = 0; j < board[0].length; j++) {
                if (board[i][j] != '.') {
                    char temp = board[i][j];
                    board[i][j] = '.'; // Temporarily remove the number
                    if (!isvalid(board, i, j, temp)) {
                        return false;
                    }
                    board[i][j] = temp; // Put the number back
                }
            }
        }
        return true;
    }
}

The Math Breakdown: Demystifying the 3×3 Grid Formula

The hardest part of this code to understand is this exact line: board[3*(i/3) + m/3][3*(j/3) + m%3]

How does a single loop variable m (from 0 to 8) traverse a 3×3 grid? Let's do a dry run.

Step 1: Finding the Starting Point of the Box

The grid is 9×9, broken into nine 3×3 boxes. If we are at a random cell, say row i = 4, col j = 5, which box are we in? 
Because integer division in Java drops the decimal:
i / 3 = 4 / 3 = 1
j / 3 = 5 / 3 = 1

Now multiply by 3 to get the actual starting coordinates (top-left corner) of that specific sub-box:
3 * 1 = 3 (row offset)
3 * 1 = 3 (col offset)
So, the 3×3 box starts at row 3, col 3.

Step 2: Traversing the Box (Dry Run)

Now, as m goes from 0 to 8, we use m / 3 for rows and m % 3 for columns:
m = 0: row offset 0/3 = 0, col offset 0%3 = 0 → checks (3+0, 3+0) = (3, 3)
m = 1: row offset 1/3 = 0, col offset 1%3 = 1 → checks (3+0, 3+1) = (3, 4)
m = 2: row offset 2/3 = 0, col offset 2%3 = 2 → checks (3+0, 3+2) = (3, 5)
m = 3: row offset 3/3 = 1, col offset 3%3 = 0 → checks (3+1, 3+0) = (4, 3)
m = 4: row offset 4/3 = 1, col offset 4%3 = 1 → checks (3+1, 3+1) = (4, 4)
...and so on up to m = 8.

This brilliant math formula maps a 1D loop (0 to 8) directly onto a 2D 3×3 grid perfectly! No nested loops needed inside the isvalid function.

Approach 2: The HashSet Solution (Single Pass)

While the first approach is highly space-efficient, it does a bit of redundant checking. An alternative approach that interviewers love is using a HashSet. Instead of checking rows and columns every time we see a number, we generate a unique "string signature" for every number and attempt to add it to a HashSet.

If we see a 5 at row 0 and col 1, we create three strings:
1. "5 in row 0"
2. "5 in col 1"
3. "5 in block 0-0"

The HashSet.add() method returns false if the item already exists in the set. 
If it returns false, we instantly know the board is invalid!

HashSet Java Code:

class Solution {
    public boolean isValidSudoku(char[][] board) {
        HashSet<String> seen = new HashSet<>();
        for (int i = 0; i < 9; i++) {
            for (int j = 0; j < 9; j++) {
                char number = board[i][j];
                if (number != '.') {
                    // HashSet.add() returns false if the element already exists
                    if (!seen.add(number + " in row " + i) ||
                        !seen.add(number + " in col " + j) ||
                        !seen.add(number + " in block " + i/3 + "-" + j/3)) {
                        return false;
                    }
                }
            }
        }
        return true;
    }
}

Notice how we use i/3 + "-" + j/3 to identify the blocks. The top-left box is block 0-0, the bottom-right is block 2-2.

Time and Space Complexity Breakdown

Interviewers will always ask for your complexity analysis. Because a Sudoku board is strictly fixed at 9×9, the strict Big-O is actually constant. However, let's look at it conceptually as if the board were N×N.

Approach 1: In-Place Validation (Your Solution)
Time complexity: O(1), strictly speaking. We traverse 81 cells, and for each cell we do at most 9 iterations: 81 × 9 = 729 operations. Since 729 is a constant, it's O(1). (If the board were N×N, the time complexity would be O(N^3), because for N^2 cells we iterate N times.)
Space complexity: O(1). We only use primitive variables (i, j, k, m, temp). No extra memory is allocated.

Approach 2: HashSet Approach
Time complexity: O(1). We traverse the 81 cells exactly once; generating strings and adding to a HashSet takes O(1) time. (If the board were N×N, the time complexity would be O(N^2).)
Space complexity: O(1). The HashSet will store a maximum of 81 × 3 = 243 strings. Since this upper limit is fixed, space is constant.

Conclusion

The Valid Sudoku problem is a fantastic exercise in matrix traversal and coordinate math. When solving this in an interview:
1. Use the first approach if you want to impress the interviewer with O(1) space complexity and your deep understanding of math formulas (the /3 and %3 trick).
2. 
Use the second approach (HashSet) if you want to show off your knowledge of data structures and write highly readable, clean, and clever code.

I hope this breakdown gives you the intuition needed so you never have to memorize the code for LeetCode 36!

Happy Coding! Keep Learning🤟
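To replay the dry run of the sub-box formula with runnable code, here is a tiny harness (the class name is illustrative): for cell (i=4, j=5), the single loop m = 0..8 visits exactly the nine cells of the sub-box starting at (3, 3).

```java
public class SubBoxFormulaDemo {
    public static void main(String[] args) {
        // Cell (4, 5) lies in the middle 3x3 sub-box, whose top-left is (3, 3)
        int i = 4, j = 5;
        for (int m = 0; m < 9; m++) {
            int row = 3 * (i / 3) + m / 3; // box base row + row offset
            int col = 3 * (j / 3) + m % 3; // box base col + col offset
            System.out.println("(" + row + ", " + col + ")");
        }
    }
}
```

The nine printed coordinates cover rows 3-5 and columns 3-5, matching the dry run in the article.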

LeetCodeJavaMatrixHash TableRecursionBacktrackingMedium
Roman to Integer – Easy Explanation with Java Solution (LeetCode 13)

Roman to Integer – Easy Explanation with Java Solution (LeetCode 13)

Try the QuestionYou can try solving the problem here before reading the solution:👉 https://leetcode.com/problems/roman-to-integer/Practicing it yourself first helps you understand the logic better.Roman to Integer – Complete GuideRoman numerals are an ancient numbering system used by the Romans. Instead of digits like 1, 2, 3, they used specific characters to represent values.This problem asks us to convert a Roman numeral string into its corresponding integer value.It is a very common interview question and frequently appears in coding platforms like LeetCode, HackerRank, and coding interviews at product-based companies.Roman Numeral SymbolsRoman numerals consist of seven characters, each representing a fixed value.SymbolValueI1V5X10L50C100D500M1000Example conversions:III = 3 → 1 + 1 + 1XII = 12 → 10 + 1 + 1XXVII = 27 → 10 + 10 + 5 + 1 + 1Normally, Roman numerals are written from largest to smallest from left to right.But there is an important exception.The Subtraction RuleIn some cases, Roman numerals use subtraction instead of addition.If a smaller value appears before a larger value, it means we subtract it.Examples:RomanCalculationResultIV5 − 14IX10 − 19XL50 − 1040XC100 − 1090CD500 − 100400CM1000 − 100900These are the only valid subtraction cases.Example WalkthroughExample 1Inputs = "III"ExplanationI + I + I = 1 + 1 + 1 = 3Output3Example 2Inputs = "LVIII"ExplanationL = 50V = 5III = 350 + 5 + 3 = 58Output58Example 3Inputs = "MCMXCIV"ExplanationM = 1000CM = 900XC = 90IV = 4Total = 1994Output1994Intuition Behind the SolutionThe key idea is to compare each Roman numeral with the numeral to its right.Two situations occur:Case 1 — Normal AdditionIf the current value ≥ next valueWe simply add it to the total.ExampleVI5 + 1 = 6Case 2 — Subtraction CaseIf the current value < next valueWe subtract it.ExampleIV5 - 1 = 4Strategy to SolveCreate a HashMap to store Roman numeral values.Start from the rightmost character.Initialize total with the last character's value.Move 
from right to left. Compare the current numeral with the next numeral. If it is smaller, subtract it; otherwise, add it. Continue until the beginning of the string. This approach works because Roman numerals follow a local comparison rule.

Java Implementation

import java.util.HashMap;

class Solution {
    public int romanToInt(String s) {
        HashMap<Character, Integer> mp = new HashMap<>();
        mp.put('I', 1);
        mp.put('V', 5);
        mp.put('X', 10);
        mp.put('L', 50);
        mp.put('C', 100);
        mp.put('D', 500);
        mp.put('M', 1000);

        char[] arr = s.toCharArray();
        int tot = mp.get(arr[arr.length - 1]); // last numeral is always added

        // Walk from right to left, comparing each numeral with the one after it
        for (int j = arr.length - 2; j >= 0; j--) {
            if (mp.get(arr[j]) < mp.get(arr[j + 1])) {
                tot -= mp.get(arr[j]); // smaller before larger: subtract
            } else {
                tot += mp.get(arr[j]);
            }
        }
        return tot;
    }
}

Code Explanation

Step 1 — Store Roman Values
We create a HashMap so we can quickly get the numeric value of each Roman character (I → 1, V → 5, X → 10, ...). This allows O(1) lookup time.

Step 2 — Initialize Total
We start with the last character: int tot = mp.get(arr[arr.length - 1]);
Why? Because the last numeral is always added, never subtracted.

Step 3 — Traverse from Right to Left
We move backward through the string and compare each numeral with the one to its right.

Step 4 — Apply Roman Rules
If the current numeral is smaller than the one to its right, subtract it (e.g. IV = 5 - 1). Otherwise, add it (e.g. VI = 5 + 1).

Time Complexity: O(n), where n is the length of the string. We only traverse the string once.

Space Complexity: O(1). The HashMap size is constant (only 7 entries).

Key Takeaways
✔ Roman numerals mostly use addition
✔ Subtraction occurs only in six specific cases
✔ Comparing the current and next value is the simplest approach
✔ A single-pass O(n) solution is enough

This is why this problem is commonly used to test string processing and logical reasoning in coding interviews.

Final Thoughts

The Roman to Integer problem is a great beginner-friendly problem that teaches HashMap usage, string traversal, conditional logic, and pattern recognition. Mastering these simple problems builds a strong foundation for more complex algorithmic questions. If you're preparing for coding interviews, this is definitely a problem you should practice.
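The same local comparison rule can also be written as a left-to-right single pass, which some readers find easier to follow. This is an alternative sketch, not the article's original solution; the class name is illustrative.

```java
import java.util.HashMap;
import java.util.Map;

public class RomanToIntLeftToRight {
    static int romanToInt(String s) {
        Map<Character, Integer> mp = new HashMap<>();
        mp.put('I', 1); mp.put('V', 5); mp.put('X', 10); mp.put('L', 50);
        mp.put('C', 100); mp.put('D', 500); mp.put('M', 1000);

        int total = 0;
        for (int i = 0; i < s.length(); i++) {
            int value = mp.get(s.charAt(i));
            // A smaller value before a larger one is one of the six
            // subtraction cases (IV, IX, XL, XC, CD, CM)
            if (i + 1 < s.length() && value < mp.get(s.charAt(i + 1))) {
                total -= value;
            } else {
                total += value;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(romanToInt("III"));     // 3
        System.out.println(romanToInt("LVIII"));   // 58
        System.out.println(romanToInt("MCMXCIV")); // 1994
    }
}
```

Both directions are O(n) time and O(1) space; only the traversal order differs.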

LeetCodeJavaString ProblemsEasyRoman Numerals
LeetCode 462: Minimum Moves to Equal Array Elements II | Java Solution Explained (Median Approach)

LeetCode 462: Minimum Moves to Equal Array Elements II | Java Solution Explained (Median Approach)

Introduction

LeetCode 462 is a classic mathematical and greedy problem. We are given an integer array where each operation allows us to:

- Increment an element by 1
- Decrement an element by 1

Our task is to make all numbers equal while using the minimum number of moves.

At first glance, this may look like a simple array problem. But the hidden concepts behind this question are:

- Median property
- Greedy optimization
- Absolute difference minimization

This problem is extremely popular in coding interviews because it tests logical thinking more than coding complexity.

Problem Link

Problem Statement

You are given an integer array nums. In one move:

- You can increase an element by 1
- Or decrease an element by 1

You must make all array elements equal. Return the minimum number of operations required.

Example 1
Input: nums = [1,2,3]
Output: 2
Explanation: [1,2,3] → [2,2,3] → [2,2,2]. Total operations = 2.

Example 2
Input: nums = [1,10,2,9]
Output: 16

Key Observation

We need to choose one target value such that all numbers move toward it.

Question: Which target gives the minimum total moves?
Answer: The median. The median minimizes the sum of absolute differences.

Why Does the Median Work?

Suppose nums = [1,2,3,10].

If target = 2:
|1-2| + |2-2| + |3-2| + |10-2| = 1 + 0 + 1 + 8 = 10

If target = 5:
|1-5| + |2-5| + |3-5| + |10-5| = 4 + 3 + 2 + 5 = 14

The median gives the minimum number of moves.

Approach 1: Brute Force

In this approach, we try every possible value as the target. For each target:

- Calculate the total operations needed
- Store the minimum answer

Algorithm
1. Find the minimum and maximum element
2. Try every value between them
3. Compute total moves
4. Return the minimum

Java Code (Brute Force)

```java
class Solution {
    public int minMoves2(int[] nums) {
        int min = Integer.MAX_VALUE;
        int max = Integer.MIN_VALUE;
        for (int num : nums) {
            min = Math.min(min, num);
            max = Math.max(max, num);
        }
        int result = Integer.MAX_VALUE;
        for (int target = min; target <= max; target++) {
            int moves = 0;
            for (int num : nums) {
                moves += Math.abs(num - target);
            }
            result = Math.min(result, moves);
        }
        return result;
    }
}
```

Time Complexity: O(N × Range) — very slow for large values.

Approach 2: Sorting + Median (Optimal)

This is the best and most commonly used approach.

Idea
1. Sort the array
2. Pick the median element
3. Calculate the total absolute difference

Steps
Step 1: Sort the array. Sorting helps us easily find the median.
Step 2: Pick the median. Median index = n / 2.
Step 3: Calculate moves. For every element: moves += abs(median - value).

Optimal Java Solution

```java
class Solution {
    public int minMoves2(int[] nums) {
        Arrays.sort(nums);
        int mid = nums.length / 2;
        int ans = 0;
        for (int i = 0; i < nums.length; i++) {
            int diff = Math.abs(nums[mid] - nums[i]);
            ans += diff;
        }
        return ans;
    }
}
```

Code Explanation

Step 1: Sort the array — `Arrays.sort(nums);` — sorting allows median calculation.
Step 2: Get the median — `int mid = nums.length / 2;` — the middle element becomes the target.
Step 3: Compute the difference — `Math.abs(nums[mid] - nums[i])` — find each element's distance from the median.
Step 4: Add all moves — `ans += diff;` — accumulate the total moves.

Approach 3: Two Pointer Optimization

After sorting, we can use two pointers. Instead of calculating each absolute difference from the median manually, we can pair the smallest and largest values.

Idea
After sorting: moves += nums[right] - nums[left], because both numbers will move toward the median.

Java Code (Two Pointer)

```java
class Solution {
    public int minMoves2(int[] nums) {
        Arrays.sort(nums);
        int left = 0;
        int right = nums.length - 1;
        int moves = 0;
        while (left < right) {
            moves += nums[right] - nums[left];
            left++;
            right--;
        }
        return moves;
    }
}
```

Why Does the Two Pointer Approach Work?

Because the median minimizes the total distance, and pairing the smallest and largest values gives the direct movement cost of bringing each pair together.

Dry Run

Input: nums = [1,10,2,9]
Sorted: [1,2,9,10]
Median (index n/2 = 2): 9

Operations:
|1-9| = 8
|2-9| = 7
|9-9| = 0
|10-9| = 1

Total: 16

Time Complexity
Sorting: O(N log N)
Traversing: O(N)
Total: O(N log N)

Space Complexity
O(1), ignoring the sorting stack.

Common Mistakes

1. Using the average instead of the median. Many people think the average gives the minimum. Wrong — the average minimizes the squared difference; the median minimizes the absolute difference.
2. Forgetting to sort. Finding the median requires sorted order.
3. Overflow issues. Values can be large; sometimes use `long ans` for safer calculation.
4. Using the wrong median index. Correct: n / 2.

Edge Cases

Case 1: Single-element array. Answer = 0.
Case 2: All elements already equal. Answer = 0.
Case 3: Negative numbers. The algorithm still works.

FAQs

Q1. Why does the median give the minimum moves?
The median minimizes the total absolute difference.

Q2. Can the average work?
No. The average does not minimize absolute distance.

Q3. Is sorting necessary?
Yes. Sorting helps us easily find the median.

Q4. Which approach is best?
The sorting + median approach.

Interview Insight

Interviewers ask this problem to test:
- Median property understanding
- Greedy optimization
- Mathematical thinking
- Array manipulation

Conclusion

LeetCode 462 is one of the most important median-based interview questions. The major learnings are:

- The median minimizes the total absolute difference
- Sorting makes finding the median easy
- The sum of distances gives the answer

Once you understand why the median works, this question becomes very simple.
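To convince yourself that the median really is optimal, a small self-check can compare the sort-and-median answer against the brute force on random inputs. This is a sketch, not part of the original article; the class name `MinMovesCheck` is made up for illustration.

```java
import java.util.Arrays;
import java.util.Random;

// Hypothetical self-check: compares the O(N * Range) brute force
// against the sort-and-median solution on random small arrays.
class MinMovesCheck {
    static int bruteForce(int[] nums) {
        int min = Arrays.stream(nums).min().getAsInt();
        int max = Arrays.stream(nums).max().getAsInt();
        int best = Integer.MAX_VALUE;
        for (int target = min; target <= max; target++) {
            int moves = 0;
            for (int num : nums) moves += Math.abs(num - target);
            best = Math.min(best, moves);
        }
        return best;
    }

    static int byMedian(int[] nums) {
        int[] copy = nums.clone();
        Arrays.sort(copy);
        int median = copy[copy.length / 2];  // median index = n / 2
        int moves = 0;
        for (int num : copy) moves += Math.abs(num - median);
        return moves;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        for (int trial = 0; trial < 100; trial++) {
            int[] nums = rnd.ints(1 + rnd.nextInt(10), -50, 50).toArray();
            if (bruteForce(nums) != byMedian(nums))
                throw new AssertionError("Mismatch on " + Arrays.toString(nums));
        }
        System.out.println("median matches brute force on all trials");
    }
}
```

If the two approaches ever disagreed, the check would fail, so running it is a quick way to build trust in the median property before relying on it in an interview.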

Math, Median, Medium, LeetCode, Java, Array, Two Pointer, Sorting
Fast and Slow Pointer Technique in Linked List: Cycle Detection, Proof, and Complete Explanation


🚀 Before We Start

Try these problems (optional but helpful):
https://leetcode.com/problems/linked-list-cycle/
https://leetcode.com/problems/linked-list-cycle-ii/

🤔 Let's Talk Honestly…

When you first learn this technique, you're told:
👉 "Slow moves 1 step, fast moves 2 steps"
👉 "If they meet → cycle exists"

But your brain asks:
❓ Why 2 steps?
❓ Why do they meet at all?
❓ Why does resetting a pointer find the cycle start?
❓ Is this magic or logic?

👉 Let's answer each doubt one by one.

🧩 Doubt 1: Why do we even use two pointers?

❓ Question: Why not just use one pointer?

✅ Answer: With one pointer:
- You can only move forward
- You cannot detect loops efficiently

👉 Two pointers create relative motion. That relative motion is the key.

🧩 Doubt 2: Why fast = 2 steps and slow = 1 step?

❓ Question: Why exactly 2 and 1?

✅ Answer: We need fast speed > slow speed, so that fast catches up to slow.

🧠 Think like this: if both move at the same speed (slow → 1 step, fast → 1 step), the gap between them never changes — fast will NEVER catch up ❌

If slow → 1 step and fast → 2 steps, 👉 fast gains 1 node every step.

🔥 Key Insight: relative speed = fast - slow = 1. 👉 This means fast is closing the gap by 1 node every step.

🧩 Doubt 3: Why do they ALWAYS meet in a cycle?

❓ Question: Okay, fast is faster… but why is a meeting guaranteed?

🧠 Imagine a Circular Track. Inside a cycle, the list behaves like a circle of length λ. Now:
- Slow moves 1 step
- Fast moves 2 steps

🔄 Gap Behavior: each step, Gap = Gap - 1, because fast is catching up. Eventually Gap = 0 — 👉 they meet 🎯

💡 Simple Analogy: two runners on a circular track, one faster and one slower. 👉 The faster runner will lap and meet the slower runner.

🧩 Doubt 4: What if there is NO cycle?

❓ Question: Why does this fail without a cycle?
✅ Answer: If there is no cycle, the list ends and fast reaches null. 👉 No loop → no meeting.

🧩 Doubt 5: Where do they meet?

❓ Question: Do they meet at the cycle start?
❌ Answer: No, not necessarily. They meet somewhere inside the cycle.

🧩 Doubt 6: Then how do we find the cycle start?

Now comes the most important part.

🎯 Setup. Let's define:
- a = distance from head to cycle start
- b = distance from cycle start to meeting point
- c = remaining cycle

Cycle length: λ = b + c

🧠 What happens when they meet?
Slow distance: a + b
Fast distance: 2(a + b)

Using the relation:
2(a + b) = a + b + kλ

Solve:
a + b = kλ
=> a = kλ - b
=> a = (k-1)λ + (λ - b)

💥 Final Meaning: a equals the distance from the meeting point to the cycle start (possibly plus whole laps of the cycle).

🔥 BIG CONCLUSION
👉 Distance from head → cycle start
👉 = Distance from meeting point → cycle start

🧩 Doubt 7: Why does resetting a pointer work?

❓ Question: Why move one pointer to head?
✅ Answer: Because one pointer is a away from the start, and the other is also a away (via the cycle). 👉 Move both 1 step at a time and they meet at the Cycle Start 🎯

🔄 Visualization

Head → → → Cycle Start → → Meeting Point → → back to Start

Both pointers — one from the head, one from the meeting point — travel the same distance 👉 and meet at the start.

🧩 Doubt 8: Can we use fast = 3 steps?

❓ Question: Will this work?
✅ Answer: Yes, BUT the math becomes more complex, it is harder to reason about, and there is no extra benefit. 👉 So we use the simplest 2 : 1 ratio.

🧠 Final Mental Model

Think in 3 steps:
1. Different Speeds — fast moves faster → the gap reduces
2. Circular Structure — in a cycle, positions repeat
3. Guaranteed Meeting — finite positions + relative motion → collision

🧩 TEMPLATE 1: Detect Cycle

```java
ListNode slow = head;
ListNode fast = head;
while (fast != null && fast.next != null) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow == fast) {
        return true;
    }
}
return false;
```

🧩 TEMPLATE 2: Find Cycle Start

```java
ListNode slow = head;
ListNode fast = head;
while (fast != null && fast.next != null) {
    slow = slow.next;
    fast = fast.next.next;
    if (slow == fast) {
        slow = head;
        while (slow != fast) {
            slow = slow.next;
            fast = fast.next;
        }
        return slow;
    }
}
return null;
```

🧩 TEMPLATE 3: Find Middle of Linked List

❓ Problem: find the middle node of a linked list.

🧠 Intuition: fast moves twice as fast, so when fast reaches the end, slow has covered half the list. 👉 Slow = middle.

💻 Code

```java
ListNode slow = head;
ListNode fast = head;
while (fast != null && fast.next != null) {
    slow = slow.next;
    fast = fast.next.next;
}
return slow;
```

⚠️ Even Length Case

1 → 2 → 3 → 4 → 5 → 6
👉 Returns 4 (the second middle)

❓ How to get the first middle?

```java
while (fast.next != null && fast.next.next != null) {
    slow = slow.next;
    fast = fast.next.next;
}
return slow;
```

🧩 Where Else Is This Technique Used?
- Detect a cycle
- Find the cycle start
- Find the middle node
- Check palindrome (linked list)
- Split a list (merge sort)
- Intersection problems

⚙️ Time & Space Complexity
Time: O(n)
Space: O(1)

❌ Common Mistakes
- Forgetting the fast.next != null check
- Thinking the meeting point = cycle start ❌
- Not resetting the pointer properly

🧠 Final Mental Model

Think in 3 steps:
1. Speed Difference — fast moves faster → the gap reduces
2. Circular Nature — a cycle means repeated positions
3. Guaranteed Meeting — finite nodes + relative motion → collision

🔥 One Line to Remember
Fast catches slow because it reduces the gap inside a loop.

🚀 Conclusion

Now you understand:
✅ Why fast moves faster
✅ Why they meet
✅ Why meeting proves a cycle
✅ Why the reset gives the cycle start
✅ How to find the middle using the same logic

👉 This is not just a trick… it's a mathematical guarantee based on motion and cycles.

💡 Master this once, and you'll solve multiple linked list problems easily.
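Templates 1 and 2 above can be assembled into a runnable demonstration. This is a sketch: the `ListNode` class is the standard LeetCode-style node defined locally here, and the class name `CycleDemo` is made up for illustration.

```java
// Hypothetical runnable version of Templates 1 and 2.
class CycleDemo {
    static class ListNode {
        int val;
        ListNode next;
        ListNode(int val) { this.val = val; }
    }

    static ListNode detectCycleStart(ListNode head) {
        ListNode slow = head, fast = head;
        while (fast != null && fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) {            // meeting point somewhere inside the cycle
                slow = head;               // reset one pointer to head
                while (slow != fast) {     // both now move 1 step at a time
                    slow = slow.next;
                    fast = fast.next;
                }
                return slow;               // first node of the cycle
            }
        }
        return null;                       // fast hit null => no cycle
    }

    public static void main(String[] args) {
        // Build 1 -> 2 -> 3 -> 4 -> 5, then connect 5 back to 3 (a = 2, cycle length = 3)
        ListNode head = new ListNode(1);
        ListNode two = new ListNode(2), three = new ListNode(3);
        ListNode four = new ListNode(4), five = new ListNode(5);
        head.next = two; two.next = three; three.next = four; four.next = five;
        five.next = three;

        System.out.println(detectCycleStart(head).val); // prints 3, the cycle start
    }
}
```

Tracing this by hand matches the math in Doubt 6: the pointers first meet inside the cycle (at node 4 here), and after the reset both pointers arrive at node 3, the true cycle start.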

Linked List, Fast & Slow Pointer, Two Pointer Technique, Floyd Algorithm, DSA Patterns, Deep Intuition
Left and Right Sum Differences


LeetCode 2574: Left and Right Sum Differences (Java)

The Left and Right Sum Differences problem is a classic array manipulation challenge. It tests your ability to efficiently calculate prefix and suffix values — a skill essential for more advanced algorithms like "Product of Array Except Self."

🔗 Resources

Problem Link: LeetCode 2574 - Left and Right Sum Differences

📝 Problem Statement

You are given a 0-indexed integer array nums. You need to return an array answer of the same length where:

answer[i] = |leftSum[i] - rightSum[i]|

- leftSum[i] is the sum of elements to the left of index i.
- rightSum[i] is the sum of elements to the right of index i.

💡 Approach 1: The Three-Array Method (Beginner Friendly)

This approach is highly intuitive. We pre-calculate all left sums and all right sums in separate arrays before computing the final difference.

The Logic
1. Prefix array: fill pref[] by adding the previous element to the cumulative sum.
2. Suffix array: fill suff[] by iterating backward from the end of the array.
3. Result: loop one last time to calculate Math.abs(pref[i] - suff[i]).

Java Implementation

```java
public int[] leftRightDifference(int[] nums) {
    int n = nums.length;
    int[] pref = new int[n];
    int[] suff = new int[n];
    int[] ans = new int[n];

    // Calculate Left Sums (Prefix)
    for (int i = 1; i < n; i++) {
        pref[i] = pref[i - 1] + nums[i - 1];
    }

    // Calculate Right Sums (Suffix)
    for (int i = n - 2; i >= 0; i--) {
        suff[i] = suff[i + 1] + nums[i + 1];
    }

    // Combine results
    for (int i = 0; i < n; i++) {
        ans[i] = Math.abs(pref[i] - suff[i]);
    }
    return ans;
}
```

Time Complexity: O(n)
Space Complexity: O(n) (uses extra space for the pref and suff arrays)

🚀 Approach 2: The Running Sum Method (Space Optimized)

In technical interviews, you should aim for O(1) extra space. Instead of storing every suffix sum, we calculate the total sum first and derive the right sum using logic.

The Logic

We use the mathematical property:

Right Sum = Total Sum - Left Sum - Current Element

By maintaining a single variable leftSum that updates as we iterate, we can calculate the result using only the output array.

Java Implementation

```java
public int[] leftRightDifference(int[] nums) {
    int n = nums.length;
    int[] ans = new int[n];
    int totalSum = 0;
    int leftSum = 0;

    // 1. Get the total sum of all elements
    for (int num : nums) {
        totalSum += num;
    }

    // 2. Calculate rightSum and the difference on the fly
    for (int i = 0; i < n; i++) {
        int rightSum = totalSum - leftSum - nums[i];
        ans[i] = Math.abs(leftSum - rightSum);
        // Update leftSum for the next index
        leftSum += nums[i];
    }
    return ans;
}
```

Time Complexity: O(n)
Space Complexity: O(1) (excluding the output array)

📊 Summary Comparison

| Feature | Approach 1 | Approach 2 |
| --- | --- | --- |
| Space Complexity | O(n) (higher) | O(1) (optimal) |
| Logic Type | Storage-based | Mathematical |
| Use Case | Beginners | Interviews |

Key Takeaway

While Approach 1 is easier to visualize, Approach 2 is more professional. It shows you can handle data efficiently without unnecessary memory allocation, which is critical when dealing with large-scale systems.
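The running-sum identity above can be verified end to end on LeetCode 2574's Example 1 (nums = [10,4,8,3], expected output [15,1,11,22]). This is a sketch wrapping the space-optimized method in a hypothetical class `LeftRightDemo` so it can be run directly.

```java
import java.util.Arrays;

// Hypothetical runnable wrapper around the running-sum approach above.
class LeftRightDemo {
    static int[] leftRightDifference(int[] nums) {
        int n = nums.length;
        int[] ans = new int[n];
        int totalSum = 0, leftSum = 0;
        for (int num : nums) totalSum += num;          // total of all elements
        for (int i = 0; i < n; i++) {
            // rightSum is derived, never stored: total - prefix - current
            int rightSum = totalSum - leftSum - nums[i];
            ans[i] = Math.abs(leftSum - rightSum);
            leftSum += nums[i];                        // extend the prefix for the next index
        }
        return ans;
    }

    public static void main(String[] args) {
        // LeetCode 2574, Example 1
        System.out.println(Arrays.toString(leftRightDifference(new int[]{10, 4, 8, 3})));
        // prints [15, 1, 11, 22]
    }
}
```

Walking through index 1: totalSum = 25, leftSum = 10, so rightSum = 25 - 10 - 4 = 11 and the answer is |10 - 11| = 1, matching the expected output.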

LeetCode, Prefix, Suffix
LeetCode 496: Next Greater Element I — Java Solution With All Approaches Explained


Introduction

LeetCode 496 Next Greater Element I is your gateway into one of the most important and frequently tested patterns in coding interviews — the Monotonic Stack. Once you understand this problem deeply, problems like Next Greater Element II, Daily Temperatures, and Largest Rectangle in Histogram all start to make sense.

Here is the link to the question: LeetCode 496

This article covers a plain-English explanation, a real-life analogy, brute force and optimal approaches in Java, a detailed dry run, complexity analysis, common mistakes, and FAQs.

What Is the Problem Really Asking?

You have two arrays. nums2 is the main array. nums1 is a smaller subset of nums2. For every element in nums1, find its position in nums2 and look to the right — what is the first element that is strictly greater? If none exists, return -1.

Example:
nums1 = [4,1,2], nums2 = [1,3,4,2]

- For 4 in nums2: elements to its right are [2], none greater → -1
- For 1 in nums2: elements to its right are [3,4,2], first greater is 3
- For 2 in nums2: no elements to its right → -1

Output: [-1, 3, -1]

Real Life Analogy — The Taller Person in a Queue

Imagine you are standing in a queue and you want to know — who is the first person taller than you standing somewhere behind you in the line? You look behind you one by one until you find someone taller. That person is your "next greater element." If everyone behind you is shorter, your answer is -1.

Now imagine doing this for every person in the queue efficiently — instead of each person looking one by one, you use a smart system that processes everyone in a single pass. That smart system is the Monotonic Stack.

Approach 1: Brute Force (Beginner Friendly)

The Idea
For each element in nums1, find its position in nums2, then scan everything to its right to find the first greater element.

```java
public int[] nextGreaterElement(int[] nums1, int[] nums2) {
    int[] ans = new int[nums1.length];
    for (int i = 0; i < nums1.length; i++) {
        int found = -1;
        boolean seen = false;
        for (int j = 0; j < nums2.length; j++) {
            if (seen && nums2[j] > nums1[i]) {
                found = nums2[j];
                break;
            }
            if (nums2[j] == nums1[i]) {
                seen = true;
            }
        }
        ans[i] = found;
    }
    return ans;
}
```

Simple to understand but inefficient. For each element in nums1 you scan the entire nums2.

Time Complexity: O(m × n) — where m = nums1.length, n = nums2.length
Space Complexity: O(1) — ignoring the output array

This works for the given constraints (n ≤ 1000) but will not scale for larger inputs. The follow-up specifically asks for better.

Approach 2: Monotonic Stack + HashMap (Optimal Solution) ✅

The Idea
This is the best approach. The key insight is — instead of answering queries for nums1 elements one by one, precompute the next greater element for every element in nums2 and store the results in a HashMap. Then answering nums1 queries becomes just a HashMap lookup.

To precompute efficiently, we use a Monotonic Stack — a stack that always stays in decreasing order from bottom to top.

Why traverse from right to left? Because we are looking for the next greater element to the right. Starting from the right end, by the time we process any element, we have already seen everything to its right.

Algorithm:
1. Traverse nums2 from right to left
2. Maintain a stack of "candidate" next greater elements
3. For the current element, pop all stack elements that are smaller or equal — they can never be the next greater for anything to the left
4. If the stack is empty → next greater is -1; else → the top of the stack is the answer
5. Push the current element onto the stack
6. Store the result in the HashMap
7. Look up each nums1 element in the HashMap

```java
public int[] nextGreaterElement(int[] nums1, int[] nums2) {
    Stack<Integer> st = new Stack<>();
    HashMap<Integer, Integer> mp = new HashMap<>();

    // Precompute next greater for every element in nums2
    for (int i = nums2.length - 1; i >= 0; i--) {
        // Pop elements smaller than or equal to current — they are useless
        while (!st.empty() && nums2[i] >= st.peek()) {
            st.pop();
        }
        // Top of stack is the next greater, or -1 if empty
        mp.put(nums2[i], st.empty() ? -1 : st.peek());
        // Push current element as a candidate for elements to its left
        st.push(nums2[i]);
    }

    // Answer queries for nums1
    int[] ans = new int[nums1.length];
    for (int i = 0; i < nums1.length; i++) {
        ans[i] = mp.get(nums1[i]);
    }
    return ans;
}
```

Why Is the Stack Monotonic?

After popping smaller elements, the stack always maintains a decreasing order from bottom to top. This means the top of the stack at any point is always the smallest element seen so far to the right — making it the best candidate for "next greater."

This is called a Monotonic Decreasing Stack and it is the heart of this entire pattern.

Detailed Dry Run — nums2 = [1,3,4,2]

We traverse from right to left:

i = 3, nums2[3] = 2
- Stack is empty, no elements to pop
- Stack empty → mp.put(2, -1)
- Push 2 → stack: [2]

i = 2, nums2[2] = 4
- Stack top is 2, and 4 >= 2 → pop 2 → stack: []
- Stack empty → mp.put(4, -1)
- Push 4 → stack: [4]

i = 1, nums2[1] = 3
- Stack top is 4, and 3 < 4 → stop popping
- Stack not empty → mp.put(3, 4)
- Push 3 → stack: [4, 3]

i = 0, nums2[0] = 1
- Stack top is 3, and 1 < 3 → stop popping
- Stack not empty → mp.put(1, 3)
- Push 1 → stack: [4, 3, 1]

HashMap after processing nums2: 1 → 3, 2 → -1, 3 → 4, 4 → -1

Now answer nums1 = [4, 1, 2]:
- nums1[0] = 4 → mp.get(4) = -1
- nums1[1] = 1 → mp.get(1) = 3
- nums1[2] = 2 → mp.get(2) = -1

✅ Output: [-1, 3, -1]

Time Complexity: O(n + m) — n for processing nums2, m for answering nums1 queries
Space Complexity: O(n) — the HashMap and Stack each store at most n elements

Why Pop Elements Smaller Than Current?

This is the most important thing to understand in this problem. When we are at element x and we see a stack element y where y < x, we pop y. Why?

Because y sits to the right of x in the array (we traverse right to left), and x > y. Take any element e further to the left: scanning rightward, e reaches x before it reaches y. If e < x, its next greater is found at x (or even earlier) — y never gets a chance. If e >= x, then e > y as well, so y cannot be e's next greater either. Either way, y can never be the "next greater" for anything to the left. It becomes useless and gets discarded.

This is why the stack stays decreasing — every element we keep is a legitimate candidate for being someone's next greater element.

How This Differs From Previous Stack Problems

You have been solving stack problems with strings — backspace, stars, adjacent duplicates. This problem introduces the stack for arrays and searching, which is a step up in complexity.

The pattern shift is: instead of using the stack to build or reduce a string, we use it to maintain a window of candidates while scanning. This monotonic stack idea is what powers many hard problems like Largest Rectangle in Histogram, Trapping Rain Water, and Daily Temperatures.

Common Mistakes to Avoid

1. Using > instead of >= in the pop condition
We pop when nums2[i] >= st.peek(). If you use only >, elements equal to the current value stay on the stack and can be reported as the answer even though we need strictly greater — giving wrong results.

2. Traversing left to right instead of right to left
Going left to right makes it hard to know what is to the right of the current element. Always go right to left for "next greater to the right" problems.

3. Recomputing instead of using the HashMap lookup
Some people recompute inside the nums1 loop. Always precompute in a HashMap — that is the whole point of the optimization.

FAQs — People Also Ask

Q1. What is a Monotonic Stack and why is it used in LeetCode 496?
A Monotonic Stack is a stack that maintains its elements in either increasing or decreasing order. In LeetCode 496, a Monotonic Decreasing Stack is used to efficiently find the next greater element for every number in nums2 in a single pass, reducing time complexity from O(n²) to O(n).

Q2. What is the time complexity of the LeetCode 496 optimal solution?
The optimal solution runs in O(n + m) time where n is the length of nums2 and m is the length of nums1. Processing nums2 takes O(n) and answering all nums1 queries via the HashMap takes O(m).

Q3. Why do we traverse nums2 from right to left?
Because we are looking for the next greater element to the right. Starting from the right end means by the time we process any element, we have already seen all elements to its right and stored them in the stack as candidates.

Q4. Is LeetCode 496 asked in coding interviews?
Yes, it is commonly used as an introduction to the Monotonic Stack pattern at companies like Amazon, Google, and Microsoft. It often appears as a warmup before harder follow-ups like Next Greater Element II (circular array) or Daily Temperatures.

Q5. What is the difference between LeetCode 496 and LeetCode 739 Daily Temperatures?
Both use the same Monotonic Stack pattern. In 496 you return the actual next greater value. In 739 you return the number of days (index difference) until a warmer temperature. The core stack logic is identical.

Similar LeetCode Problems to Practice Next

- 739. Daily Temperatures — Medium — days until warmer temperature, same pattern
- 503. Next Greater Element II — Medium — circular array version
- 901. Online Stock Span — Medium — monotonic stack with span counting
- 84. Largest Rectangle in Histogram — Hard — classic monotonic stack
- 42. Trapping Rain Water — Hard — monotonic stack or two pointers

Conclusion

LeetCode 496 Next Greater Element I is the perfect entry point into the Monotonic Stack pattern. The brute force is easy to understand, but the real learning happens when you see why the stack stays decreasing and how that single insight collapses an O(n²) problem into O(n).

Once you truly understand why we pop smaller elements and how the HashMap bridges the gap between precomputation and query answering, the entire family of Next Greater Element problems becomes approachable — including the harder circular and histogram variants.
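The optimal approach can be exercised directly on the article's example. This sketch uses a hypothetical class `NextGreaterDemo`, and swaps `java.util.Stack` for a `Deque`, which is the more idiomatic modern choice; the logic is otherwise the same monotonic-stack-plus-HashMap approach described above.

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.Deque;
import java.util.HashMap;

// Hypothetical runnable version of the optimal approach.
class NextGreaterDemo {
    static int[] nextGreaterElement(int[] nums1, int[] nums2) {
        Deque<Integer> st = new ArrayDeque<>();
        HashMap<Integer, Integer> mp = new HashMap<>();
        // Precompute next greater for every element in nums2, right to left
        for (int i = nums2.length - 1; i >= 0; i--) {
            while (!st.isEmpty() && nums2[i] >= st.peek()) {
                st.pop();                      // smaller-or-equal candidates are useless
            }
            mp.put(nums2[i], st.isEmpty() ? -1 : st.peek());
            st.push(nums2[i]);                 // current element becomes a candidate
        }
        int[] ans = new int[nums1.length];
        for (int i = 0; i < nums1.length; i++) {
            ans[i] = mp.get(nums1[i]);         // O(1) lookup per query
        }
        return ans;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(
            nextGreaterElement(new int[]{4, 1, 2}, new int[]{1, 3, 4, 2})));
        // prints [-1, 3, -1], matching the dry run above
    }
}
```

The output reproduces the dry run exactly: the HashMap ends up as {1 → 3, 2 → -1, 3 → 4, 4 → -1}, and the nums1 queries are answered by lookup.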

Array, Stack, Monotonic Stack, HashMap, Easy, LeetCode
LeetCode 143 Reorder List - Java Solution Explained


IntroductionLeetCode 143 Reorder List is one of those problems that looks simple when you read it but immediately makes you wonder — where do I even start? There is no single trick that solves it. Instead it combines three separate linked list techniques into one clean solution. Mastering this problem means you have genuinely understood linked lists at an intermediate level.You can find the problem here — LeetCode 143 Reorder List.This article walks through everything — what the problem wants, the intuition behind each step, all three techniques used, a detailed dry run, complexity analysis, and common mistakes beginners make.What Is the Problem Really Asking?You have a linked list: L0 → L1 → L2 → ... → LnYou need to reorder it to: L0 → Ln → L1 → Ln-1 → L2 → Ln-2 → ...In plain English — take one node from the front, then one from the back, then one from the front, then one from the back, and keep alternating until all nodes are used.Example:Input: 1 → 2 → 3 → 4 → 5Output: 1 → 5 → 2 → 4 → 3Node 1 from front, Node 5 from back, Node 2 from front, Node 4 from back, Node 3 stays in middle.Real Life Analogy — Dealing Cards From Both EndsImagine you have a deck of cards laid out in a line face up: 1, 2, 3, 4, 5. Now you deal them by alternately picking from the left end and the right end of the line:Pick 1 from left → placePick 5 from right → place after 1Pick 2 from left → place after 5Pick 4 from right → place after 2Pick 3 (only one left) → place after 4Result: 1, 5, 2, 4, 3That is exactly what the problem wants. The challenge is doing this efficiently on a singly linked list where you cannot just index from the back.Why This Problem Is Hard for BeginnersIn an array you can just use two pointers — one at the start and one at the end — and swap/interleave easily. But a singly linked list only goes forward. You cannot go backwards. 
You cannot easily access the last element.This is why the problem requires a three-step approach that cleverly works around the limitations of a singly linked list.The Three Step ApproachEvery experienced developer solves this problem in exactly three steps:Step 1 — Find the middle of the linked list using the Fast & Slow Pointer techniqueStep 2 — Reverse the second half of the linked listStep 3 — Merge the two halves by alternating nodes from eachLet us understand each step deeply before looking at code.Step 1: Finding the Middle — Fast & Slow PointerThe Fast & Slow Pointer technique (also called Floyd's algorithm) uses two pointers moving at different speeds through the list:slow moves one step at a timefast moves two steps at a timeWhen fast reaches the end, slow is exactly at the middle. This works because fast covers twice the distance of slow in the same number of steps.ListNode fast = head;ListNode slow = head;while (fast.next != null && fast.next.next != null) { fast = fast.next.next; slow = slow.next;}// slow is now at the middleFor 1 → 2 → 3 → 4 → 5:Start: slow=1, fast=1Step 1: slow=2, fast=3Step 2: slow=3, fast=5 (fast.next is null, stop)Middle is node 3For 1 → 2 → 3 → 4:Start: slow=1, fast=1Step 1: slow=2, fast=3Step 2: fast.next.next is null, stopslow=2, middle is node 2After finding the middle, we cut the list in two by setting slow.next = null. This disconnects the first half from the second half.Step 2: Reversing the Second HalfOnce we have the second half starting from slow.next, we reverse it. 
After reversal, what was the last node becomes the first — giving us easy access to the back elements of the original list.public ListNode reverse(ListNode head) { ListNode curr = head; ListNode prev = null; while (curr != null) { ListNode next = curr.next; // save next curr.next = prev; // reverse the link prev = curr; // move prev forward curr = next; // move curr forward } return prev; // prev is the new head}For second half 3 → 4 → 5 (from the first example):Reverse → 5 → 4 → 3Now we have:First half: 1 → 2 → 3 (but 3 is the end since we cut at slow)Wait — actually after cutting at slow=3: first half is 1 → 2 → 3, second half reversed is 5 → 4Let us be precise. For 1 → 2 → 3 → 4 → 5, slow stops at 3. slow.next = null cuts to give:First half: 1 → 2 → 3 → nullSecond half before reverse: 4 → 5Second half after reverse: 5 → 4Step 3: Merging Two HalvesNow we have two lists and we merge them by alternately taking one node from each:Take from first half, take from second half, take from first half, take from second half...ListNode orig = head; // pointer for first halfListNode newhead = second; // pointer for reversed second halfwhile (newhead != null) { ListNode temp1 = orig.next; // save next of first half ListNode temp2 = newhead.next; // save next of second half orig.next = newhead; // first → second newhead.next = temp1; // second → next of first orig = temp1; // advance first half pointer newhead = temp2; // advance second half pointer}Why do we loop on newhead != null and not orig != null? Because the second half is always equal to or shorter than the first half (we cut at middle). 
Once the second half is exhausted, the remaining first half nodes are already in the correct position.Complete Solutionclass Solution { public ListNode reverse(ListNode head) { ListNode curr = head; ListNode prev = null; while (curr != null) { ListNode next = curr.next; curr.next = prev; prev = curr; curr = next; } return prev; } public void reorderList(ListNode head) { // Step 1: Find middle using fast & slow pointer ListNode fast = head; ListNode slow = head; while (fast.next != null && fast.next.next != null) { fast = fast.next.next; slow = slow.next; } // Step 2: Reverse second half ListNode newhead = reverse(slow.next); slow.next = null; // cut the list into two halves // Step 3: Merge two halves alternately ListNode orig = head; while (newhead != null) { ListNode temp1 = orig.next; ListNode temp2 = newhead.next; orig.next = newhead; newhead.next = temp1; orig = temp1; newhead = temp2; } }}Complete Dry Run — head = [1, 2, 3, 4, 5]Step 1: Find MiddleList: 1 → 2 → 3 → 4 → 5Initial: slow=1, fast=1Iteration 1: slow=2, fast=3Iteration 2: fast.next=4, fast.next.next=5 → slow=3, fast=5fast.next is null → stopslow is at node 3Step 2: Cut and ReverseCut: slow.next = nullFirst half: 1 → 2 → 3 → nullSecond half: 4 → 5Reverse second half 4 → 5:prev=null, curr=4 → next=5, 4.next=null, prev=4, curr=5prev=4, curr=5 → next=null, 5.next=4, prev=5, curr=nullReturn prev=5Reversed second half: 5 → 4 → nullStep 3: Mergeorig=1, newhead=5Iteration 1:temp1 = orig.next = 2temp2 = newhead.next = 4orig.next = newhead → 1.next = 5newhead.next = temp1 → 5.next = 2orig = temp1 = 2newhead = temp2 = 4List so far: 1 → 5 → 2 → 3Iteration 2:temp1 = orig.next = 3temp2 = newhead.next = nullorig.next = newhead → 2.next = 4newhead.next = temp1 → 4.next = 3orig = temp1 = 3newhead = temp2 = nullList so far: 1 → 5 → 2 → 4 → 3newhead is null → loop endsFinal result: 1 → 5 → 2 → 4 → 3 ✅Dry Run — head = [1, 2, 3, 4]Step 1: Find MiddleInitial: slow=1, fast=1Iteration 1: slow=2, fast=3fast.next=4, 
fast.next.next=null → stopslow is at node 2Step 2: Cut and ReverseFirst half: 1 → 2 → nullSecond half: 3 → 4Reversed: 4 → 3 → nullStep 3: Mergeorig=1, newhead=4Iteration 1:temp1=2, temp2=31.next=4, 4.next=2orig=2, newhead=3List: 1 → 4 → 2 → 3Iteration 2:temp1=null (2.next was originally 3 but we cut at slow=2, so 2.next = null... wait)Actually after cutting at slow=2, first half is 1 → 2 → null, so orig when it becomes 2, orig.next = null.temp1 = orig.next = nulltemp2 = newhead.next = null2.next = 3, 3.next = nullorig = null, newhead = nullnewhead is null → stopFinal result: 1 → 4 → 2 → 3 ✅Why slow.next = null Must Come After Saving newheadThis is a subtle but critical ordering detail in the code. Look at this sequence:ListNode newhead = reverse(slow.next); // save reversed second half FIRSTslow.next = null; // THEN cut the listIf you cut first (slow.next = null) and then try to reverse, you lose the reference to the second half entirely because slow.next is already null. Always save the second half reference before cutting.Time and Space ComplexityTime Complexity: O(n) — each of the three steps (find middle, reverse, merge) makes a single pass through the list. Total is 3 passes = O(3n) = O(n).Space Complexity: O(1) — everything is done by rearranging pointers in place. No extra arrays, no recursion stack, no additional data structures. 
Just a handful of pointer variables.This is the optimal solution — linear time and constant space.Alternative Approach — Using ArrayList (Simpler but O(n) Space)If you find the three-step approach hard to implement under interview pressure, here is a simpler approach using extra space:public void reorderList(ListNode head) { // store all nodes in ArrayList for random access List<ListNode> nodes = new ArrayList<>(); ListNode curr = head; while (curr != null) { nodes.add(curr); curr = curr.next; } int left = 0; int right = nodes.size() - 1; while (left < right) { nodes.get(left).next = nodes.get(right); left++; if (left == right) break; // odd number of nodes nodes.get(right).next = nodes.get(left); right--; } nodes.get(left).next = null; // terminate the list}This is much easier to understand and code. Store all nodes in an ArrayList, use two pointers from both ends, and wire up the next pointers.Time Complexity: O(n) Space Complexity: O(n) — ArrayList stores all nodesThis is acceptable in most interviews. Mention the O(1) space approach as the optimal solution if asked.Common Mistakes to AvoidNot cutting the list before merging If you do not set slow.next = null after finding the middle, the first half still points into the second half. During merging, this creates cycles and infinite loops. Always cut before merging.Wrong loop condition for finding the middle The condition fast.next != null && fast.next.next != null ensures fast does not go out of bounds when jumping two steps. Using just fast != null && fast.next != null moves slow one step too far for even-length lists.Looping on orig instead of newhead The merge loop should run while newhead != null, not while orig != null. The second half is always shorter or equal to the first half. 
Once the second half is done, the remaining first half is already correctly placed.Forgetting to save both temp pointers before rewiring In the merge step, you must save both orig.next and newhead.next before changing any pointers. Changing orig.next first and then trying to access orig.next to save it gives you the wrong node.How This Problem Combines Multiple PatternsThis problem is special because it does not rely on a single technique. It is a combination of three fundamental linked list operations:Fast & Slow Pointer — you saw this concept in problems like finding the middle of a list and detecting cycles (LeetCode 141, 142).Reverse a Linked List — the most fundamental linked list operation, appears in LeetCode 206 and as a subtask in dozens of problems.Merge Two Lists — similar to merging two sorted lists (LeetCode 21) but here order is not sorted, it is alternating.Solving this problem proves you are comfortable with all three patterns individually and can combine them when needed.FAQs — People Also AskQ1. What is the most efficient approach for LeetCode 143 Reorder List? The three-step approach — find middle with fast/slow pointer, reverse second half, merge alternately — runs in O(n) time and O(1) space. It is the optimal solution. The ArrayList approach is O(n) time and O(n) space but simpler to code.Q2. Why use fast and slow pointer to find the middle? Because a singly linked list has no way to access elements by index. You cannot just do list[length/2]. The fast and slow pointer technique finds the middle in a single pass without knowing the length beforehand.Q3. Why reverse the second half instead of the first half? The problem wants front-to-back alternation. If you reverse the second half, its first node is the original last node — exactly what you need to interleave with the front of the first half. Reversing the first half would give the wrong order.Q4. What is the time complexity of LeetCode 143? 
O(n) time for three linear passes (find middle, reverse, merge). O(1) space since all operations are in-place pointer manipulations with no extra data structures.Q5. Is LeetCode 143 asked in coding interviews? Yes, frequently at companies like Amazon, Google, Facebook, and Microsoft. It is considered a benchmark problem for linked list mastery because it requires combining three separate techniques cleanly under pressure.Similar LeetCode Problems to Practice Next206. Reverse Linked List — Easy — foundation for step 2 of this problem876. Middle of the Linked List — Easy — fast & slow pointer isolated21. Merge Two Sorted Lists — Easy — merging technique foundation234. Palindrome Linked List — Easy — also uses find middle + reverse second half148. Sort List — Medium — merge sort on linked list, uses same split techniqueConclusionLeetCode 143 Reorder List is one of the best Medium linked list problems because it forces you to think in multiple steps and combine techniques rather than apply a single pattern. The fast/slow pointer finds the middle efficiently without knowing the length. Reversing the second half turns the "cannot go backwards" limitation of singly linked lists into a non-issue. And the alternating merge weaves everything together cleanly.Work through the dry runs carefully — especially the pointer saving step in the merge. Once you see why each step is necessary and how they connect, this problem will always feel approachable no matter when it shows up in an interview.
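For completeness, the three-step O(1)-space approach discussed in this article can be sketched as a single method. This is a sketch, not the article's exact code: the ListNode class mirrors LeetCode's definition, and the orig/newhead names follow the merge discussion above.

```java
// Minimal ListNode definition, as on LeetCode
class ListNode {
    int val;
    ListNode next;
    ListNode(int val) { this.val = val; }
}

class Reorder {
    public static void reorderList(ListNode head) {
        if (head == null || head.next == null) return;

        // Step 1: find the middle with fast/slow pointers
        ListNode slow = head, fast = head;
        while (fast.next != null && fast.next.next != null) {
            slow = slow.next;
            fast = fast.next.next;
        }

        // Cut the list before merging — avoids cycles
        ListNode second = slow.next;
        slow.next = null;

        // Step 2: reverse the second half
        ListNode prev = null;
        while (second != null) {
            ListNode next = second.next;
            second.next = prev;
            prev = second;
            second = next;
        }

        // Step 3: merge alternately; loop on the (shorter or equal) reversed half
        ListNode orig = head, newhead = prev;
        while (newhead != null) {
            // save BOTH next pointers before rewiring
            ListNode t1 = orig.next, t2 = newhead.next;
            orig.next = newhead;
            newhead.next = t1;
            orig = t1;
            newhead = t2;
        }
    }
}
```

Each of the three loops is a single linear pass, which is where the O(n) time and O(1) space bounds come from.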

LeetCodeJavaLinked ListTwo PointerFast Slow PointerMedium
Stack Problems Explained: NGR, NGL, NSR, NSL — The Four-Problem Family You Must Master


IntroductionAmong all the problems built around the Stack data structure, four stand out as a family — they appear repeatedly in coding interviews, competitive programming, and real-world software systems. These four are the Next Greater to the Right (NGR), Next Greater to the Left (NGL), Next Smaller to the Right (NSR), and Next Smaller to the Left (NSL).What makes them special is not just their individual solutions — it is the fact that all four are solved by a single elegant technique called the Monotonic Stack. Learn the pattern once, and you have all four in your toolkit permanently.This guide breaks down each problem with a full solution, step-by-step dry run, edge cases, and the exact reasoning behind every decision in the code. Whether you are preparing for a technical interview or simply want to deeply understand this pattern — you are in the right place.The Story That Makes This ClickBefore any code, let us understand this family of problems with one real-world story.Imagine you are standing in a queue at a cricket stadium. Everyone in the queue has a different height. You are standing somewhere in the middle. You look to your right and ask — who is the first person taller than me? That is your Next Greater Element to the Right (NGR).Now you look to your left — who is the first person taller than me on this side? That is your Next Greater to the Left (NGL).Now instead of taller, you ask shorter — who is the first shorter person to my right? That is Next Smaller to the Right (NSR).And shorter to your left? That is Next Smaller to the Left (NSL).Same queue. Same people. Four different questions. Four different answers. 
This is exactly what these four problems are about — and they all share the same solution pattern.

What Is a Monotonic Stack?

A monotonic stack is just a regular stack with one rule — elements inside it are always maintained in a specific order, either always increasing or always decreasing from bottom to top.

You never enforce this rule explicitly. It happens naturally as you pop elements that violate the order before pushing a new one. This popping step is the key insight — the moment you pop an element, you have found its answer for the current element being processed.

This one pattern solves all four problems. Only two small details change between them — the direction of traversal and the comparison condition inside the while loop.

The Four Problems — Quick Reference

Problem | Direction             | What You Want          | Problem Links
NGR     | Traverse Right to Left | First greater on right | "Next Greater Element GFG"
NGL     | Traverse Left to Right | First greater on left  | "Previous Greater Element GFG"
NSR     | Traverse Right to Left | First smaller on right | "Next Smaller Element GFG"
NSL     | Traverse Left to Right | First smaller on left  | "Previous Smaller Element GFG"

Problem 1 — Next Greater Element to Right (NGR)

GFG Problem: Search "Next Greater Element" on GeeksForGeeks
Difficulty: Medium | Accuracy: 32.95% | Submissions: 515K+

The Question

For each element in the array, find the first element to its right that is strictly greater than it. If none exists, return -1.

Input: [1, 3, 2, 4]
Output: [3, 4, 4, -1]

Input: [6, 8, 0, 1, 3]
Output: [8, -1, 1, 3, -1]

Real World Example

Think of the stock market. You have daily closing prices: [1, 3, 2, 4]. For each day, you want to know — on which future day will the price first exceed today's price? Day 1 has price 1, first exceeded on Day 2 with price 3. Day 2 has price 3, first exceeded on Day 4 with price 4. Day 3 has price 2, also first exceeded on Day 4 with price 4. Day 4 has no future day, so -1.
This is exactly NGR — and it is literally used in financial software to detect price breakout points.

The Intuition

The brute force is obvious — for every element, scan everything to its right and find the first greater one. That works but it is O(n²). For an array of 10⁶ elements that becomes 10¹² operations. It will time out on any large input.

The stack insight is this — traverse right to left. As you move left, the stack always holds elements you have already seen on the right side. These are the candidates for being the next greater element. Before pushing the current element, pop all stack elements that are smaller than or equal to it. Why? Because the current element is blocking them — for any future element to the left, the current element will always be encountered first, so those smaller popped elements can never be an answer for anything. Whatever remains on top of the stack after popping is the answer for the current element.

Step-by-Step Dry Run

Array: [1, 3, 2, 4], traversing right to left.

i=3, element is 4. Stack is empty. Answer for index 3 is -1. Push 4. Stack: [4]
i=2, element is 2. Top of stack is 4, which is greater than 2. Answer for index 2 is 4. Push 2. Stack: [4, 2]
i=1, element is 3. Top of stack is 2, which is not greater than 3. Pop 2. Top is now 4, which is greater than 3. Answer for index 1 is 4. Push 3. Stack: [4, 3]
i=0, element is 1. Top of stack is 3, which is greater than 1. Answer for index 0 is 3. Push 1.
Stack: [4, 3, 1]

Answers collected right to left: [-1, 4, 4, 3]
After Collections.reverse(): [3, 4, 4, -1] ✓

The Code

// NGR — Next Greater Element to Right
class Solution {
    public ArrayList<Integer> nextLargerElement(int[] arr) {
        ArrayList<Integer> result = new ArrayList<>();
        Stack<Integer> st = new Stack<>();
        // Traverse from RIGHT to LEFT
        for (int i = arr.length - 1; i >= 0; i--) {
            // Pop all elements smaller than or equal to current
            // They can never be the answer for any element to the left
            while (!st.empty() && arr[i] >= st.peek()) {
                st.pop();
            }
            // Whatever is on top now is the next greater element
            if (st.empty()) {
                result.add(-1);
            } else {
                result.add(st.peek());
            }
            // Push current — it is a candidate for elements to the left
            st.push(arr[i]);
        }
        // Collected answers right to left, so reverse before returning
        Collections.reverse(result);
        return result;
    }
}

Edge Cases

All elements decreasing — Input: [5, 4, 3, 2, 1] Output: [-1, -1, -1, -1, -1]
Every element has no greater element to its right. Traversing right to left, each new element is larger than everything already in the stack, so the stack gets cleared and the answer is always -1.

All elements increasing — Input: [1, 2, 3, 4, 5] Output: [2, 3, 4, 5, -1]
Each element's next greater is simply the next element in the array. The last element always gets -1 since nothing exists to its right.

All elements equal — Input: [3, 3, 3, 3] Output: [-1, -1, -1, -1]
Equal elements do not count as greater. The pop condition uses >= so equals get removed from the stack, ensuring duplicates never answer each other.

Single element — Input: [7] Output: [-1]
Nothing to the right, always -1.

Why only 32.95% accuracy on GFG?
Most people either forget to reverse the result at the end, use the wrong comparison in the while loop, or submit a brute force O(n²) solution that times out on large inputs.

Problem 2 — Next Greater Element to Left / Previous Greater Element (NGL)

GFG Problem: Search "Previous Greater Element" on GeeksForGeeks
Difficulty: Medium | Accuracy: 68.93% | Submissions: 7K+

The Question

For each element in the array, find the first element to its left that is strictly greater than it. If none exists, return -1.

Input: [10, 4, 2, 20, 40, 12, 30]
Output: [-1, 10, 4, -1, -1, 40, 40]

Real World Example

Imagine you are a junior employee at a company. For each person in the office, you want to know — who is the first senior person sitting to their left who earns more? This is NGL. It is used in organizational hierarchy systems, salary band analysis tools, and even in database query optimizers to find the nearest dominant record on the left side.

The Intuition

This is the mirror image of NGR. Instead of traversing right to left, we traverse left to right. The stack holds elements we have already seen from the left side — these are candidates for being the previous greater element. For each new element, pop everything from the stack that is smaller than or equal to it. Whatever remains on top is the first greater element to its left. Then push the current element for future use.

No reverse is needed here because we are already going left to right and building the result in order.

Step-by-Step Dry Run

Array: [10, 4, 2, 20, 40, 12, 30], traversing left to right.

i=0, element is 10. Stack is empty. Answer is -1. Push 10. Stack: [10]
i=1, element is 4. Top is 10, greater than 4. Answer is 10. Push 4. Stack: [10, 4]
i=2, element is 2. Top is 4, greater than 2. Answer is 4. Push 2. Stack: [10, 4, 2]
i=3, element is 20. Top is 2, not greater than 20. Pop 2. Top is 4, not greater. Pop 4. Top is 10, not greater. Pop 10. Stack is empty. Answer is -1. Push 20. Stack: [20]
i=4, element is 40.
Top is 20, not greater. Pop 20. Stack empty. Answer is -1. Push 40. Stack: [40]
i=5, element is 12. Top is 40, greater than 12. Answer is 40. Push 12. Stack: [40, 12]
i=6, element is 30. Top is 12, not greater than 30. Pop 12. Top is 40, greater than 30. Answer is 40. Push 30. Stack: [40, 30]

Result: [-1, 10, 4, -1, -1, 40, 40] ✓ No reverse needed.

The Code

// NGL — Next Greater Element to Left (Previous Greater Element)
class Solution {
    static ArrayList<Integer> preGreaterEle(int[] arr) {
        Stack<Integer> st = new Stack<>();
        ArrayList<Integer> result = new ArrayList<>();
        // Traverse LEFT to RIGHT — no reverse needed
        for (int i = 0; i < arr.length; i++) {
            // Pop all elements smaller than or equal to current
            while (!st.empty() && arr[i] >= st.peek()) {
                st.pop();
            }
            // Top of stack is the previous greater element
            if (!st.empty() && st.peek() > arr[i]) {
                result.add(st.peek());
            } else {
                result.add(-1);
            }
            // Push current for future elements
            st.push(arr[i]);
        }
        return result;
    }
}

Edge Cases

Strictly increasing array — Input: [10, 20, 30, 40] Output: [-1, -1, -1, -1]
Each new element is larger than everything before it, so the stack always gets fully cleared. No previous greater exists for any element.

First element is always -1 — regardless of its value, the first element has nothing to its left. The stack is empty at i=0, so the answer is always -1 for index 0. This is guaranteed by the logic.

Duplicate values — Input: [5, 5, 5] Output: [-1, -1, -1]
Equal elements do not qualify as greater. The pop condition uses >= so duplicates get removed from the stack and never answer each other.

Problem 3 — Next Smaller Element to Right (NSR)

GFG Problem: Search "Next Smaller Element" on GeeksForGeeks
Difficulty: Medium | Accuracy: 36.26% | Submissions: 225K+

The Question

For each element in the array, find the first element to its right that is strictly smaller than it.
If none exists, return -1.

Input: [4, 8, 5, 2, 25]
Output: [2, 5, 2, -1, -1]

Input: [13, 7, 6, 12]
Output: [7, 6, -1, -1]

Real World Example

You work at a warehouse. Shelves have items of weights: [4, 8, 5, 2, 25] kg. For each item, the system needs to find the first lighter item sitting to its right on the shelf — this is used to optimize load balancing and shelf arrangement algorithms. Item of 4 kg — first lighter to the right is 2 kg. Item of 8 kg — first lighter is 5 kg. Item of 5 kg — first lighter is 2 kg. Items of 2 kg and 25 kg have no lighter item to their right, so -1.

The Intuition

NSR is structurally identical to NGR — we traverse right to left and collect answers, then reverse. The only change is the pop condition. In NGR we popped elements smaller than or equal to current because we wanted greater. Here we want smaller, so we pop elements greater than or equal to current. After popping, whatever remains on top is the first smaller element to the right.

Step-by-Step Dry Run

Array: [4, 8, 5, 2, 25], traversing right to left.

i=4, element is 25. Stack is empty. Answer is -1. Push 25. Stack: [25]
i=3, element is 2. Top is 25, which is greater than or equal to 2. Pop 25. Stack is empty. Answer is -1. Push 2. Stack: [2]
i=2, element is 5. Top is 2, which is less than 5. Answer is 2. Push 5. Stack: [2, 5]
i=1, element is 8. Top is 5, which is less than 8. Answer is 5. Push 8. Stack: [2, 5, 8]
i=0, element is 4. Top is 8, which is greater than or equal to 4. Pop 8. Top is 5, which is greater than or equal to 4. Pop 5. Top is 2, which is less than 4. Answer is 2. Push 4.
Stack: [2, 4]

Answers collected right to left: [-1, -1, 2, 5, 2]
After Collections.reverse(): [2, 5, 2, -1, -1] ✓

The Code

// NSR — Next Smaller Element to Right
class Solution {
    static ArrayList<Integer> nextSmallerEle(int[] arr) {
        Stack<Integer> st = new Stack<>();
        ArrayList<Integer> result = new ArrayList<>();
        // Traverse RIGHT to LEFT
        for (int i = arr.length - 1; i >= 0; i--) {
            // Pop elements greater than or equal to current
            // Opposite of NGR — we want smaller, so clear the bigger ones
            while (!st.empty() && arr[i] <= st.peek()) {
                st.pop();
            }
            // Top is now the next smaller element
            if (!st.empty() && st.peek() < arr[i]) {
                result.add(st.peek());
            } else {
                result.add(-1);
            }
            st.push(arr[i]);
        }
        Collections.reverse(result);
        return result;
    }
}

Edge Cases

Strictly decreasing array — Input: [5, 4, 3, 2, 1] Output: [4, 3, 2, 1, -1]
Each element's next smaller is simply the next element in the array. Last element is always -1.

Strictly increasing array — Input: [1, 2, 3, 4, 5] Output: [-1, -1, -1, -1, -1]
No element has a smaller element to its right since the array only grows.

Last element is always -1 — nothing exists to its right regardless of its value.

Single element — Input: [42] Output: [-1]

Why 36.26% accuracy on GFG?
The most common mistake is keeping the NGR pop condition (arr[i] >= st.peek()) and only changing the problem description in your head. The pop condition must flip to arr[i] <= st.peek() for NSR. Forgetting this gives completely wrong answers that look plausible, which makes the bug hard to spot.

Problem 4 — Next Smaller Element to Left / Previous Smaller Element (NSL)

GFG Problem: Search "Previous Smaller Element" on GeeksForGeeks

The Question

For each element in the array, find the first element to its left that is strictly smaller than it. If none exists, return -1.

Input: [4, 8, 5, 2, 25]
Output: [-1, 4, 4, -1, 2]

Real World Example

Same warehouse. Now the system looks left instead of right. For the item weighing 8 kg, the first lighter item to its left is 4 kg.
For 25 kg, the first lighter to its left is 2 kg. For 4 kg, nothing lighter exists to its left so -1. For 2 kg, nothing lighter to its left so -1. This kind of lookback query appears in time-series analysis, price history tracking, and sensor data processing.

The Intuition

NSL is the mirror of NSR, exactly as NGL was the mirror of NGR. We traverse left to right (no reverse needed). We maintain a stack of candidates from the left. For each element, pop all elements greater than or equal to it — they cannot be the answer since they are not smaller. Whatever remains on top is the first smaller element to the left. Push current and move on.

Step-by-Step Dry Run

Array: [4, 8, 5, 2, 25], traversing left to right.

i=0, element is 4. Stack is empty. Answer is -1. Push 4. Stack: [4]
i=1, element is 8. Top is 4, which is less than 8. Answer is 4. Push 8. Stack: [4, 8]
i=2, element is 5. Top is 8, which is greater than or equal to 5. Pop 8. Top is 4, which is less than 5. Answer is 4. Push 5. Stack: [4, 5]
i=3, element is 2. Top is 5, greater than or equal to 2. Pop 5. Top is 4, greater than or equal to 2. Pop 4. Stack is empty. Answer is -1. Push 2. Stack: [2]
i=4, element is 25. Top is 2, which is less than 25. Answer is 2. Push 25. Stack: [2, 25]

Result: [-1, 4, 4, -1, 2] ✓ No reverse needed.

The Code

// NSL — Next Smaller Element to Left (Previous Smaller Element)
class Solution {
    static ArrayList<Integer> prevSmallerEle(int[] arr) {
        Stack<Integer> st = new Stack<>();
        ArrayList<Integer> result = new ArrayList<>();
        // Traverse LEFT to RIGHT — no reverse needed
        for (int i = 0; i < arr.length; i++) {
            // Pop elements greater than or equal to current
            while (!st.empty() && arr[i] <= st.peek()) {
                st.pop();
            }
            // Top is the previous smaller element
            if (!st.empty() && st.peek() < arr[i]) {
                result.add(st.peek());
            } else {
                result.add(-1);
            }
            st.push(arr[i]);
        }
        return result;
    }
}

Edge Cases

First element is always -1 — nothing exists to its left.
Stack is empty at i=0 every time.

All same elements — Input: [5, 5, 5, 5] Output: [-1, -1, -1, -1]
Equal elements do not qualify as smaller. The condition arr[i] <= st.peek() ensures equals are popped and never answer each other.

Single element — Input: [9] Output: [-1]

The Master Cheat Sheet

This is the one table to save and refer to whenever you encounter any of these four problems.

Variant                  | Traverse Direction | Pop Condition       | Reverse Result?
NGR — Next Greater Right | Right to Left      | arr[i] >= st.peek() | Yes
NGL — Next Greater Left  | Left to Right      | arr[i] >= st.peek() | No
NSR — Next Smaller Right | Right to Left      | arr[i] <= st.peek() | Yes
NSL — Next Smaller Left  | Left to Right      | arr[i] <= st.peek() | No

Two rules to remember forever:

Rule 1 — Direction. If you are looking to the right, traverse right to left and reverse at the end. If you are looking to the left, traverse left to right and no reverse is needed.

Rule 2 — Pop Condition. If you want a greater element, pop when arr[i] >= st.peek() to clear out smaller useless candidates. If you want a smaller element, pop when arr[i] <= st.peek() to clear out bigger useless candidates.

Mix these two rules and you derive all four variants instantly without memorizing anything separately.

Common Mistakes to Avoid

Wrong pop condition — Using > instead of >= in the while loop. This causes duplicate values to wrongly answer each other. Always use >= for greater problems and <= for smaller problems inside the while loop.

Forgetting to reverse — For right-to-left traversals (NGR and NSR), you collect answers from right to left. You must call Collections.reverse() before returning. Skipping this is the single most common reason for wrong answers on these problems.

Not checking empty stack before peek — Always check !st.empty() before calling st.peek(). An empty stack peek throws EmptyStackException at runtime and will crash your solution.

Wrong if-condition after the while loop — After the while loop, the if-condition must use strict comparison. For NGR use st.peek() > arr[i].
For NSR use st.peek() < arr[i]. These must be strict — no equals sign here.

Confusing traversal direction with answer direction — You traverse right to left for NGR but the answer array is filled left to right. The reverse at the end handles this. Do not try to index directly into the result array to compensate — just use reverse.

Time and Space Complexity

All four problems run in O(n) time and use O(n) space.

Even though there is a while loop nested inside the for loop, each element is pushed into the stack exactly once and popped from the stack at most once. So across the entire traversal, the total number of push and pop operations combined is at most 2n — which gives O(n) overall. This is the beauty of the monotonic stack.

Why These Four Problems Matter Beyond GFG

These four patterns are not just textbook exercises. They appear as the hidden sub-problem inside some of the hardest stack questions:

Largest Rectangle in Histogram uses NSR and NSL to find the left and right boundaries of each bar.
Trapping Rain Water uses NGR and NGL to determine the water level above each position.
Stock Span Problem is literally NGL applied directly to stock prices.
Sum of Subarray Minimums uses NSR and NSL together to count contributions of each element.

Once you master these four patterns deeply, a whole family of hard problems that previously seemed unapproachable suddenly becomes a matter of recognizing the pattern and applying it.

Also on This Blog

If you are building your stack foundation from scratch, check out the complete deep-dive here → Stack Data Structure in Java: The Complete Guide — covering everything from what a stack is, LIFO principle, all three implementations, every operation with code, and six practice problems.
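Since only the traversal direction and the pop comparison differ between the four variants, the two rules above can even be folded into one parameterized method. This is a sketch of my own, not a GFG signature — the solve method and its boolean flags are illustrative:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Stack;

class NearestElement {
    // wantGreater: true → nearest GREATER element (NGR/NGL), false → nearest SMALLER (NSR/NSL)
    // lookRight:   true → answer lies to the RIGHT (traverse right to left, reverse at end)
    static ArrayList<Integer> solve(int[] arr, boolean wantGreater, boolean lookRight) {
        ArrayList<Integer> result = new ArrayList<>();
        Stack<Integer> st = new Stack<>();
        int n = arr.length;
        for (int k = 0; k < n; k++) {
            int i = lookRight ? n - 1 - k : k; // Rule 1: traversal direction
            // Rule 2: pop condition — discard candidates that can never be an answer
            while (!st.empty() && (wantGreater ? arr[i] >= st.peek() : arr[i] <= st.peek())) {
                st.pop();
            }
            result.add(st.empty() ? -1 : st.peek());
            st.push(arr[i]);
        }
        if (lookRight) Collections.reverse(result); // answers were collected right to left
        return result;
    }
}
```

Calling solve(arr, true, true) gives NGR, solve(arr, true, false) gives NGL, solve(arr, false, true) gives NSR, and solve(arr, false, false) gives NSL — one body, four problems.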

MonotonicStackNextGreaterElementStackProblemsJavaGeeksForGeeksStackPattern
LeetCode 121 — Best Time to Buy and Sell Stock I | Two Pointer / Sliding Window Explained


🚀 Try This Problem First!

Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.

🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock/

Understanding the Problem

You are given an array prices where prices[i] is the stock price on day i. You can buy on one day and sell on a later day. You want to maximize your profit.

Goal: Return the maximum profit possible. If no profit can be made, return 0.

Key Rules:
You must buy before you sell — no going back in time.
You can only make one transaction (one buy, one sell).
If prices only go down, profit is impossible — return 0.

Constraints:
1 ≤ prices.length ≤ 10⁵
0 ≤ prices[i] ≤ 10⁴

Understanding the Problem With a Real-World Analogy

Imagine you're watching a stock ticker for a week. You want to pick the single best day to buy and a later day to sell. You can't predict the future — you just have historical prices. The question is: looking back at all the prices, what was the best single buy-sell pair you could have chosen?

Key Observation (Before Writing a Single Line of Code)

The profit on any given pair of days is:

profit = prices[sell_day] - prices[buy_day]

To maximize profit, for every potential sell day, we want the lowest possible buy price seen so far to the left of it. This is the core insight that drives the efficient solution.

If at any point the current price is lower than our tracked minimum, there is no point keeping the old minimum — the new one is strictly better for any future sell day.

Intuition — The Two Pointer Approach

We use two pointers i (buy pointer) and j (sell pointer), both starting at the beginning with i = 0 and j = 1.

At every step we ask: is prices[j] greater than prices[i]?

If yes → this is a profitable pair. Compute the profit and update the maximum if it's better.
If no → prices[j] is cheaper than prices[i]. There's no reason to keep i where it is — any future sell day would be better served by buying at j instead.
So we move i to j.

Either way, j always moves forward by 1 until it reaches the end of the array.

Why Moving i to j is Correct

This is the most important conceptual point. When prices[j] < prices[i], you might wonder — why not just skip j and move on? Why move i?

Because for any future day k > j, the profit buying at j is:

prices[k] - prices[j]

And the profit buying at i is:

prices[k] - prices[i]

Since prices[j] < prices[i], buying at j will always give a higher or equal profit for the same sell day k. So we update i = j — there is no scenario where keeping the old i is better.

Dry Run — Example 1 (Step by Step)

Input: prices = [7, 1, 5, 3, 6, 4]

We start with i = 0, j = 1, maxP = 0.

Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 1. 7 > 1 → prices[j] is cheaper. Move i to j. i = 1, j moves to 2. maxP = 0.
Step 2: i = 1, j = 2 → prices[i] = 1, prices[j] = 5. 1 < 5 → Profitable! profit = 5 - 1 = 4. 4 > 0 → update maxP = 4. j moves to 3.
Step 3: i = 1, j = 3 → prices[i] = 1, prices[j] = 3. 1 < 3 → Profitable! profit = 3 - 1 = 2. 2 < 4 → maxP stays 4. j moves to 4.
Step 4: i = 1, j = 4 → prices[i] = 1, prices[j] = 6. 1 < 6 → Profitable! profit = 6 - 1 = 5. 5 > 4 → update maxP = 5. j moves to 5.
Step 5: i = 1, j = 5 → prices[i] = 1, prices[j] = 4. 1 < 4 → Profitable! profit = 4 - 1 = 3. 3 < 5 → maxP stays 5. j moves to 6. j = 6 = prices.length → loop ends.

Output: maxP = 5 ✅ (Buy at day 2 price=1, sell at day 5 price=6)

Dry Run — Example 2 (Step by Step)

Input: prices = [7, 6, 4, 3, 1]

We start with i = 0, j = 1, maxP = 0.

Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 6. 7 > 6 → Move i to j. i = 1, j moves to 2. maxP = 0.
Step 2: i = 1, j = 2 → prices[i] = 6, prices[j] = 4. 6 > 4 → Move i to j. i = 2, j moves to 3. maxP = 0.
Step 3: i = 2, j = 3 → prices[i] = 4, prices[j] = 3. 4 > 3 → Move i to j. i = 3, j moves to 4. maxP = 0.
Step 4: i = 3, j = 4 → prices[i] = 3, prices[j] = 1. 3 > 1 → Move i to j. i = 4, j moves to 5.
j = 5 = prices.length → loop ends.

Output: maxP = 0 ✅ (Prices only went down, no profitable transaction exists)

The Code — Implementation

class Solution {
    public int maxProfit(int[] prices) {
        int i = 0;     // Buy pointer — points to the current minimum price day
        int j = 1;     // Sell pointer — scans forward through the array
        int maxP = 0;  // Tracks the maximum profit seen so far

        // j scans from day 1 to the end of the array
        while (i < j && j < prices.length) {
            if (prices[i] > prices[j]) {
                // prices[j] is cheaper than current buy price
                // Move buy pointer to j — buying here is strictly better
                // for any future sell day
                i = j;
            } else {
                // prices[j] > prices[i] — this is a profitable pair
                // Calculate profit and update maxP if it's the best so far
                int profit = prices[j] - prices[i];
                if (profit > maxP) {
                    maxP = profit;
                }
            }
            // Always move the sell pointer forward
            j++;
        }
        // If no profitable transaction was found, maxP remains 0
        return maxP;
    }
}

Code Walkthrough — Step by Step

Initialization: i = 0 acts as our buy day pointer (always pointing to the lowest price seen so far). j = 1 is our sell day pointer scanning forward. maxP = 0 is our answer — defaulting to 0 handles the case where no profit is possible.

The loop condition: i < j ensures we never sell before buying. j < prices.length ensures we don't go out of bounds.

When prices[i] > prices[j]: The current sell day price is cheaper than our buy day. We update i = j, because buying at j dominates buying at i for all future sell days.

When prices[j] >= prices[i]: We have a potential profit. We compute it and update maxP if it beats the current best.

j always increments: Regardless of which branch we take, j++ moves the sell pointer forward every iteration — this is what drives the loop to completion.

Common Mistakes to Avoid

Not returning 0 for no-profit cases: If prices are strictly decreasing, maxP never gets updated and stays 0.
The initialization maxP = 0 handles this correctly — never initialize it to a negative number.Using a nested loop (brute force): A common first instinct is two nested loops checking every pair. That is O(N²) and will TLE on large inputs. The two pointer approach solves it in O(N).Thinking you need to sort: Sorting destroys the order of days, which is fundamental to the problem. Never sort the prices array here.Moving i forward by 1 instead of to j: When prices[j] < prices[i], some people write i++ instead of i = j. This is wrong — you might miss the new minimum entirely and waste iterations.Complexity AnalysisTime Complexity: O(N) The j pointer traverses the array exactly once from index 1 to the end. The i pointer only moves forward and never exceeds j. So the entire algorithm is a single linear pass — O(N).Space Complexity: O(1) No extra arrays, maps, or stacks are used. Only three integer variables — i, j, and maxP.Alternative Way to Think About It (Min Tracking)Some people write this problem using a minPrice variable instead of two pointers. 
Both approaches are equivalent — the two pointer framing is slightly more intuitive visually, but here is the min-tracking version for reference:

int minPrice = Integer.MAX_VALUE;
int maxProfit = 0;
for (int price : prices) {
    if (price < minPrice) {
        minPrice = price;
    } else if (price - minPrice > maxProfit) {
        maxProfit = price - minPrice;
    }
}
return maxProfit;

The logic is the same — always track the cheapest buy day seen so far, and for each day compute what profit would look like if you sold today.

Similar Problems (Build on This Foundation)

LeetCode 122 — Best Time to Buy and Sell Stock II (multiple transactions allowed) [ Blog is also available on this - Read Now ]
LeetCode 123 — Best Time to Buy and Sell Stock III (at most 2 transactions) [ Blog is also available on this - Read Now ]
LeetCode 188 — Best Time to Buy and Sell Stock IV (at most k transactions)
LeetCode 309 — Best Time to Buy and Sell Stock with Cooldown
LeetCode 714 — Best Time to Buy and Sell Stock with Transaction Fee

LeetCode 121 is the foundation. Master this one deeply before moving to the others — they all build on the same core idea.

Key Takeaways

✅ For every potential sell day, the best buy day is always the minimum price seen to its left — this drives the whole approach.
✅ When the sell pointer finds a cheaper price than the buy pointer, always move the buy pointer there — it strictly dominates the old buy day for all future sells.
✅ Initialize maxP = 0 so that no-profit scenarios (prices only going down) are handled automatically.
✅ The two pointer approach solves this in a single linear pass — O(N) time and O(1) space.
✅ j always increments every iteration — i only moves when a cheaper price is found.

Happy Coding! If this clicked for you, the entire Stock series on LeetCode will feel much more approachable.
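To sanity-check either version against the two worked examples above, a tiny harness like the following is enough. The class and method wrapper here are illustrative (my own names, not from the article); the body is the min-tracking logic just shown:

```java
class StockCheck {
    // single-pass min-tracking version, as described above
    static int maxProfit(int[] prices) {
        int minPrice = Integer.MAX_VALUE;
        int maxProfit = 0;
        for (int price : prices) {
            if (price < minPrice) {
                minPrice = price; // new cheapest buy day seen so far
            } else if (price - minPrice > maxProfit) {
                maxProfit = price - minPrice; // best sell for the tracked minimum
            }
        }
        return maxProfit;
    }

    public static void main(String[] args) {
        System.out.println(maxProfit(new int[]{7, 1, 5, 3, 6, 4})); // prints 5
        System.out.println(maxProfit(new int[]{7, 6, 4, 3, 1}));    // prints 0
    }
}
```

Both outputs match the dry runs: buy at 1 and sell at 6 for the first array, and no profitable pair at all for the strictly falling second array.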

LeetCodeTwo PointersEasyJavaDSA
Recursion in Java - Complete Guide With Examples and Practice Problems


Introduction

If there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their head, and then immediately freeze when they have to write recursive code themselves.

The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere — in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.

This guide covers everything from the ground up. What recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.

By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.

What Is Recursion?

Recursion is when a function calls itself to solve a smaller version of the same problem.

That is the complete definition. But let us make it concrete.

Imagine you want to count down from 5 to 1. One way is a loop. Another way is — print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base — there is nothing left to count down.

void countDown(int n) {
    if (n == 0) return;   // stop here
    System.out.println(n);
    countDown(n - 1);     // solve the smaller version
}

The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function — the base case.

The Two Parts Every Recursive Function Must Have

Every correctly written recursive function has exactly two parts.
Without both, the function either gives wrong answers or runs forever.

Part 1: Base Case

The base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion. Without a base case, recursion never stops and you get a StackOverflowError — Java's way of telling you the call stack ran out of memory.

Part 2: Recursive Case

The recursive case is where the function calls itself with a smaller or simpler input — moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have infinite recursion.

Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.

How Recursion Works — The Call Stack

This is the mental model that most explanations skip, and it is the reason recursion confuses people.

Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes. When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.

Let us trace countDown(3) through the call stack:

countDown(3) called → frame pushed
    prints 3
    calls countDown(2) → frame pushed
        prints 2
        calls countDown(1) → frame pushed
            prints 1
            calls countDown(0) → frame pushed
                n == 0, return → frame popped
            back in countDown(1), return → frame popped
        back in countDown(2), return → frame popped
    back in countDown(3), return → frame popped

Output: 3, 2, 1

The call stack grows as calls go deeper, then shrinks as calls return.
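The trace above can be reproduced with a small runnable sketch. The depth counters below are my own instrumentation (not part of the original countDown), added so the pushes and pops are visible; it assumes Java 11+ for String.repeat:

```java
public class CallStackDemo {
    static int depth = 0;     // number of live countDown frames right now
    static int maxDepth = 0;  // deepest the stack ever got

    static void countDown(int n) {
        depth++;                               // a frame is pushed
        maxDepth = Math.max(maxDepth, depth);
        System.out.println("  ".repeat(depth - 1) + "enter countDown(" + n + ")");
        if (n > 0) countDown(n - 1);           // recurse toward the base case
        System.out.println("  ".repeat(depth - 1) + "leave countDown(" + n + ")");
        depth--;                               // the frame is popped
    }

    public static void main(String[] args) {
        countDown(3); // four frames total: countDown(3) down to countDown(0)
    }
}
```

When countDown(0) returns, the frames unwind in reverse order, matching the popped-frame sequence in the trace above.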
This is why recursion uses O(n) space for n levels of depth — each level occupies one stack frame in memory.

Your First Real Recursive Function — Factorial

Factorial is the classic first recursion example: n! = n × (n-1) × (n-2) × ... × 1.

Notice the pattern — n! = n × (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.

public int factorial(int n) {
    if (n == 0 || n == 1) return 1;  // base case
    return n * factorial(n - 1);     // recursive case
}

Dry Run — factorial(4)

factorial(4)
= 4 * factorial(3)
= 4 * 3 * factorial(2)
= 4 * 3 * 2 * factorial(1)
= 4 * 3 * 2 * 1
= 24

The call stack builds up going in, then the multiplications happen coming back out. This "coming back out" phase is called the return phase, or unwinding of the stack.

Time Complexity: O(n) — n recursive calls
Space Complexity: O(n) — n frames on the call stack

The Two Phases of Recursion

Every recursive function has two phases, and understanding both is critical.

Phase 1: The Call Phase (Going In)

This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase — in order from the outermost call to the innermost.

Phase 2: The Return Phase (Coming Back Out)

This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase — in reverse order, from the innermost call back to the outermost.

This distinction explains why the output order can be surprising:

void printBothPhases(int n) {
    if (n == 0) return;
    System.out.println("Going in: " + n);    // call phase
    printBothPhases(n - 1);
    System.out.println("Coming out: " + n);  // return phase
}

For printBothPhases(3):

Going in: 3
Going in: 2
Going in: 1
Coming out: 1
Coming out: 2
Coming out: 3

This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.

Types of Recursion

Recursion is not one-size-fits-all.
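Before looking at the different types, here is a self-contained check of the factorial and printBothPhases examples above. The class wrapper and the output buffer are my additions, made so the result can be inspected programmatically:

```java
public class PhasesDemo {
    static int factorial(int n) {
        if (n == 0 || n == 1) return 1;  // base case
        return n * factorial(n - 1);     // recursive case
    }

    static final StringBuilder out = new StringBuilder();

    static void printBothPhases(int n) {
        if (n == 0) return;
        out.append("Going in: ").append(n).append('\n');   // call phase
        printBothPhases(n - 1);
        out.append("Coming out: ").append(n).append('\n'); // return phase
    }

    public static void main(String[] args) {
        System.out.println(factorial(4)); // 24
        printBothPhases(3);
        System.out.print(out); // Going in: 3, 2, 1, then Coming out: 1, 2, 3
    }
}
```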
There are several distinct types, and knowing which type applies to a problem shapes how you write the solution.

1. Direct Recursion

The function calls itself directly. This is the most common type — what we have seen so far.

void direct(int n) {
    if (n == 0) return;
    direct(n - 1); // calls itself
}

2. Indirect Recursion

Function A calls Function B, which calls Function A. They form a cycle.

void funcA(int n) {
    if (n <= 0) return;
    System.out.println("A: " + n);
    funcB(n - 1);
}

void funcB(int n) {
    if (n <= 0) return;
    System.out.println("B: " + n);
    funcA(n - 1);
}

Used in: state machines, mutual recursion in parsers, certain mathematical sequences.

3. Tail Recursion

The recursive call is the last operation in the function. Nothing happens after the recursive call returns — no multiplication, no addition, nothing.

// NOT tail recursive — multiplication happens after the call returns
int factorial(int n) {
    if (n == 1) return 1;
    return n * factorial(n - 1);
}

// Tail recursive — the recursive call is the last operation
int factorialTail(int n, int accumulator) {
    if (n == 1) return accumulator;
    return factorialTail(n - 1, n * accumulator);
}

Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally — no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.

4. Head Recursion

The recursive call happens first, before any other processing. All processing happens in the return phase.

void headRecursion(int n) {
    if (n == 0) return;
    headRecursion(n - 1);   // call first
    System.out.println(n);  // process after
}

// headRecursion(5) prints: 1 2 3 4 5 (processing happens in reverse order of the calls)

5. Tree Recursion

The function makes more than one recursive call per invocation.
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.

int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls
}

The call tree for fibonacci(4):

                 fib(4)
                /      \
           fib(3)      fib(2)
           /    \      /    \
      fib(2) fib(1) fib(1) fib(0)
      /    \
 fib(1)  fib(0)

Time Complexity: O(2ⁿ) — exponential! Each call spawns two more.
Space Complexity: O(n) — maximum depth of the call tree

This is why memoization (caching results) is so important for tree recursion — it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.

6. Mutual Recursion

A specific form of indirect recursion where two functions call each other alternately to solve a problem. It differs from general indirect recursion in that the mutual calls are the core mechanism of the solution.

// Check if a number is even or odd using mutual recursion
boolean isEven(int n) {
    if (n == 0) return true;
    return isOdd(n - 1);
}

boolean isOdd(int n) {
    if (n == 0) return false;
    return isEven(n - 1);
}

Common Recursion Patterns in DSA

These are the patterns you will see over and over in interview problems.
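The O(2ⁿ) → O(n) memoization claim above is easy to verify empirically. This instrumented comparison is my own sketch (the counter names are arbitrary); it counts how many calls each version makes for n = 20:

```java
import java.util.*;

public class FibCount {
    static long naiveCalls = 0, memoCalls = 0;
    static Map<Integer, Integer> memo = new HashMap<>();

    static int fibNaive(int n) {
        naiveCalls++;                           // count every invocation
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    static int fibMemo(int n) {
        memoCalls++;                            // count every invocation
        if (n <= 1) return n;
        if (memo.containsKey(n)) return memo.get(n); // cached subproblem
        int r = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, r);
        return r;
    }

    public static void main(String[] args) {
        System.out.println(fibNaive(20) + " in " + naiveCalls + " calls");
        System.out.println(fibMemo(20) + " in " + memoCalls + " calls");
    }
}
```

The naive version makes tens of thousands of calls for n = 20, while the memoized version stays under a few dozen, because each subproblem is computed only once.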
Recognizing them is more important than memorizing solutions.

Pattern 1: Linear Recursion (Do Something, Recurse on Rest)

Process the current element, then recurse on the remaining problem.

// Sum of array
int arraySum(int[] arr, int index) {
    if (index == arr.length) return 0;             // base case
    return arr[index] + arraySum(arr, index + 1);  // current + rest
}

Pattern 2: Divide and Conquer (Split Into Two Halves)

Split the problem into two halves, solve each recursively, combine results.

// Merge Sort
void mergeSort(int[] arr, int left, int right) {
    if (left >= right) return;       // base case — single element
    int mid = (left + right) / 2;
    mergeSort(arr, left, mid);       // sort left half
    mergeSort(arr, mid + 1, right);  // sort right half
    merge(arr, left, mid, right);    // combine
}

Pattern 3: Backtracking (Try, Recurse, Undo)

Try a choice, recurse to explore it, undo the choice when backtracking.

// Generate all subsets
void subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
    if (index == nums.length) {
        result.add(new ArrayList<>(current));
        return;
    }
    // Choice 1: include nums[index]
    current.add(nums[index]);
    subsets(nums, index + 1, current, result);
    current.remove(current.size() - 1); // undo
    // Choice 2: exclude nums[index]
    subsets(nums, index + 1, current, result);
}

Pattern 4: Tree Recursion (Left, Right, Combine)

Recurse on the left subtree, recurse on the right subtree, combine or process results.

// Height of binary tree
int height(TreeNode root) {
    if (root == null) return 0;                    // base case
    int leftHeight = height(root.left);            // solve left
    int rightHeight = height(root.right);          // solve right
    return 1 + Math.max(leftHeight, rightHeight);  // combine
}

Pattern 5: Memoization (Cache Recursive Results)

Store results of recursive calls so the same subproblem is never solved twice.

Map<Integer, Integer> memo = new HashMap<>();

int fibonacci(int n) {
    if (n <= 1) return n;
    if (memo.containsKey(n)) return memo.get(n);  // return cached
    int result = fibonacci(n - 1) + fibonacci(n - 2);
    memo.put(n, result);                          // cache before returning
    return result;
}

This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space — a massive improvement.

Recursion vs Iteration — When to Use Which

This is one of the most common interview questions about recursion. Here is a clear breakdown.

Use recursion when:
- The problem has a naturally recursive structure (trees, graphs, divide and conquer)
- The solution is significantly cleaner and easier to understand recursively
- The problem involves exploring multiple paths or choices (backtracking)
- The depth of recursion is manageable (not deep enough to cause stack overflow)

Use iteration when:
- The problem is linear and a loop is equally clear
- Memory is a concern (iteration uses O(1) stack space vs O(n) for recursion)
- Performance is critical and function call overhead matters
- Recursion depth could exceed Java's stack limit (with default JVM settings the call stack typically holds on the order of ten thousand simple frames before StackOverflowError, depending on stack size and frame size)

The key rule: every recursive solution can be converted to an iterative one (usually using an explicit stack).
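To make the explicit-stack rule concrete, here is a sketch (my own, with hypothetical names) showing both conversions for factorial: the tail-recursive version collapses into a plain loop, and the general recipe replicates the call stack with a Deque:

```java
import java.util.*;

public class IterativeConversion {
    // Tail-recursive logic becomes a plain loop — exactly what
    // tail call optimization would do automatically (Java does not).
    static int factorialLoop(int n) {
        int acc = 1;
        while (n > 1) {
            acc *= n;
            n--;
        }
        return acc;
    }

    // General recipe: replicate the call stack with an explicit Deque.
    static int factorialStack(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        while (n > 1) stack.push(n--);      // "call phase": push frames
        int result = 1;
        while (!stack.isEmpty()) {
            result *= stack.pop();          // "return phase": unwind frames
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(factorialLoop(5));   // 120
        System.out.println(factorialStack(5));  // 120
    }
}
```

The explicit-stack version uses O(n) heap memory instead of O(n) call-stack memory, so it trades stack-overflow risk for ordinary allocation.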
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.

Time and Space Complexity of Recursive Functions

Analyzing complexity for recursive functions requires a specific approach.

The Recurrence Relation Method

Express the time complexity as a recurrence relation and solve it.

Factorial:
T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + n×O(1) = O(n)

Fibonacci (naive):
T(n) = T(n-1) + T(n-2) + O(1) ≈ 2×T(n-1) = O(2ⁿ)

Binary Search:
T(n) = T(n/2) + O(1) = O(log n) [by the Master Theorem]

Merge Sort:
T(n) = 2×T(n/2) + O(n) = O(n log n) [by the Master Theorem]

Space Complexity Rule for Recursion

Space complexity of a recursive function = maximum depth of the call stack × space per frame.

- Linear recursion (factorial, sum): O(n) space
- Binary recursion (naive Fibonacci): O(n) space (maximum depth, not number of nodes)
- Divide and conquer (merge sort): O(log n) space (depth of the recursion tree)
- Memoized Fibonacci: O(n) space (memo table + call stack)

Classic Recursive Problems With Solutions

Problem 1: Reverse a String

String reverse(String s) {
    if (s.length() <= 1) return s;  // base case
    // last char + reverse of everything before the last char
    return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));
}

Dry run for "hello":

reverse("hello") = 'o' + reverse("hell")
reverse("hell")  = 'l' + reverse("hel")
reverse("hel")   = 'l' + reverse("he")
reverse("he")    = 'e' + reverse("h")
reverse("h")     = "h"

Unwinding: "h" → "eh" → "leh" → "lleh" → "olleh" ✅

Problem 2: Power Function (x^n)

double power(double x, int n) {
    if (n == 0) return 1;                  // base case
    if (n < 0) return 1.0 / power(x, -n);  // handle negative exponent
    if (n % 2 == 0) {
        double half = power(x, n / 2);
        return half * half;                // x^n = (x^(n/2))^2
    } else {
        return x * power(x, n - 1);
    }
}

This is the fast power algorithm — O(log n) time instead of O(n).

Problem 3: Fibonacci With Memoization

int[] memo = new int[100];
// before the first call: Arrays.fill(memo, -1);

int fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

Time: O(n) — each value computed once
Space: O(n) — memo array + call stack

Problem 4: Tower of Hanoi

The classic recursion teaching problem: move n disks from source to destination using a helper rod.

void hanoi(int n, char source, char destination, char helper) {
    if (n == 1) {
        System.out.println("Move disk 1 from " + source + " to " + destination);
        return;
    }
    // Move n-1 disks from source to helper
    hanoi(n - 1, source, helper, destination);
    // Move the largest disk from source to destination
    System.out.println("Move disk " + n + " from " + source + " to " + destination);
    // Move n-1 disks from helper to destination
    hanoi(n - 1, helper, destination, source);
}

Time Complexity: O(2ⁿ) — the minimum number of moves required is 2ⁿ - 1
Space Complexity: O(n) — call stack depth

Problem 5: Generate All Subsets (Power Set)

void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) {
    result.add(new ArrayList<>(current));              // add the current subset
    for (int i = index; i < nums.length; i++) {
        current.add(nums[i]);                          // include
        generateSubsets(nums, i + 1, current, result); // recurse
        current.remove(current.size() - 1);            // exclude (backtrack)
    }
}

For [1, 2, 3] this generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]

Time: O(2ⁿ) — 2ⁿ subsets
Space: O(n) — recursion depth

Problem 6: Binary Search, Recursively

int binarySearch(int[] arr, int target, int left, int right) {
    if (left > right) return -1;  // base case — not found
    int mid = left + (right - left) / 2;
    if (arr[mid] == target) return mid;
    else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right);
    else return binarySearch(arr, target, left, mid - 1);
}

Time: O(log n) — halving the search space each time
Space: O(log n) — log n frames on the call stack

Recursion on Trees — The Natural Habitat

Trees are where recursion truly shines.
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure — each node's left and right children are trees themselves.

// Maximum depth of binary tree
int maxDepth(TreeNode root) {
    if (root == null) return 0;
    return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
}

// Check if a tree is symmetric
boolean isSymmetric(TreeNode left, TreeNode right) {
    if (left == null && right == null) return true;
    if (left == null || right == null) return false;
    return left.val == right.val
        && isSymmetric(left.left, right.right)
        && isSymmetric(left.right, right.left);
}

// Path sum — does any root-to-leaf path sum to target?
boolean hasPathSum(TreeNode root, int target) {
    if (root == null) return false;
    if (root.left == null && root.right == null) return root.val == target;
    return hasPathSum(root.left, target - root.val)
        || hasPathSum(root.right, target - root.val);
}

Notice the pattern in all three — the base case handles null, the recursive case handles the left and right subtrees, and the result combines both.

How to Think About Any Recursive Problem — Step by Step

This is the framework you should apply to every new recursive problem you encounter.

Step 1 — Identify the base case
What is the smallest input where you know the answer directly, without any recursion? For arrays it is usually the empty array or a single element. For trees it is the null node. For numbers it is 0 or 1.

Step 2 — Trust the recursive call
Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally — just trust it. This is the "leap of faith", and it is what makes recursion feel natural.

Step 3 — Express the current problem in terms of smaller problems
How does the answer for size n relate to the answer for size n-1 (or n/2, or subtrees)? This relationship is your recursive case.

Step 4 — Make sure each call moves toward the base case
The input must become strictly smaller with each call.
If it does not, you have infinite recursion.

Step 5 — Write the base case first, then the recursive case
Always. Writing the recursive case first leads to bugs because you have not defined when to stop.

Common Mistakes and How to Avoid Them

Mistake 1: Missing or wrong base case
The most common mistake. A missing base case causes StackOverflowError; a wrong base case causes wrong answers. Always ask — what is the simplest possible input, and what should the function return for it? Write that case first.

Mistake 2: Not moving toward the base case
If you call factorial(n) inside factorial(n) without reducing n, you recurse forever. Every recursive call must make the problem strictly smaller.

Mistake 3: Trusting your brain to trace deep recursion
Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.

Mistake 4: Forgetting to return the recursive result

// WRONG — the result is never returned (in Java this does not even compile)
int sum(int n) {
    if (n == 0) return 0;
    sum(n - 1) + n; // computed but discarded!
}

// CORRECT
int sum(int n) {
    if (n == 0) return 0;
    return sum(n - 1) + n;
}

Mistake 5: Modifying shared state without backtracking
In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.

Mistake 6: Recomputing the same subproblems
Naive Fibonacci computes fib(3) multiple times while computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.

Top LeetCode Problems on Recursion

These are organized by pattern — work through them in this order for maximum learning.

Pure Recursion Basics:
- 509. Fibonacci Number — Easy — start here; implement with and without memoization
- 344. Reverse String — Easy — recursion on arrays
- 206. Reverse Linked List — Easy — recursion on a linked list
- 50. Pow(x, n) — Medium — fast power with recursion

Tree Recursion (Most Important):
- 104. Maximum Depth of Binary Tree — Easy — simplest tree recursion
- 112. Path Sum — Easy — decision recursion on a tree
- 101. Symmetric Tree — Easy — mirror recursion on two subtrees
- 110. Balanced Binary Tree — Easy — bottom-up recursion
- 236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion
- 124. Binary Tree Maximum Path Sum — Hard — advanced tree recursion

Divide and Conquer:
- 148. Sort List — Medium — merge sort on a linked list
- 240. Search a 2D Matrix II — Medium — divide and conquer

Backtracking:
- 78. Subsets — Medium — generate all subsets
- 46. Permutations — Medium — generate all permutations
- 77. Combinations — Medium — generate combinations
- 79. Word Search — Medium — backtracking on a grid
- 51. N-Queens — Hard — classic backtracking

Memoization / Dynamic Programming:
- 70. Climbing Stairs — Easy — Fibonacci variant with memoization
- 322. Coin Change — Medium — recursion with memoization to DP
- 139. Word Break — Medium — memoized recursion

Recursion Cheat Sheet

// Linear recursion template
returnType solve(input) {
    if (baseCase) return directAnswer;
    // process current
    return solve(smallerInput);
}

// Tree recursion template
returnType solve(TreeNode root) {
    if (root == null) return baseValue;
    returnType left = solve(root.left);
    returnType right = solve(root.right);
    return combine(left, right, root.val);
}

// Backtracking template
void backtrack(choices, current, result) {
    if (goalReached) {
        result.add(copy of current);
        return;
    }
    for (choice : choices) {
        make(choice);    // add to current
        backtrack(...);  // recurse
        undo(choice);    // remove from current
    }
}

// Memoization template
Map<Input, Output> memo = new HashMap<>();
returnType solve(input) {
    if (baseCase) return directAnswer;
    if (memo.containsKey(input)) return memo.get(input);
    returnType result = solve(smallerInput);
    memo.put(input, result);
    return result;
}

FAQs — People Also Ask

Q1. What is recursion in Java with a simple example?
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial — factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3), and so on until factorial(1) returns 1 directly.

Q2. What is the difference between recursion and iteration?
Iteration uses loops (for, while) and runs in O(1) extra space. Recursion uses function calls and takes O(n) stack space for n levels of depth. Recursion is often cleaner for tree and graph problems. Iteration is better when memory is a concern or the problem is inherently linear.

Q3. What causes StackOverflowError in Java recursion?
StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. This is caused by a missing base case, a wrong base case, or an input too large for Java's stack size limit.

Q4. What is the difference between recursion and dynamic programming?
Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.

Q5. What is tail recursion and does Java support tail call optimization?
Tail recursion is when the recursive call is the absolute last operation in the function. Java does NOT support tail call optimization — Java always creates a new stack frame for each call, even if it is tail recursive. JVM languages like Scala (with the @tailrec annotation) and Kotlin (with the tailrec modifier) do support it for self-recursive functions.

Q6. How do you convert recursion to iteration?
Every recursive solution can be converted to an iterative one using an explicit stack data structure. The call stack's behavior is replicated manually — push the initial call, loop while the stack is not empty, pop, process, and push sub-calls. Tree traversals are a common example of this conversion.

Conclusion

Recursion is not magic.
It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.The learning path from here is clear — start with simple problems like Fibonacci and array sum. Move to tree problems where recursion is most natural. Then tackle backtracking. Finally add memoization to bridge into dynamic programming.Every hour you spend understanding recursion deeply pays dividends across the entire rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.

Recursion, Java, Base Case, Call Stack, Backtracking, Dynamic Programming
LeetCode 1047: Remove All Adjacent Duplicates In String — Java Solution With All Approaches Explained


Introduction: What Is LeetCode 1047 Remove All Adjacent Duplicates In String?

If you are grinding LeetCode for coding interviews at companies like Google, Amazon, or Microsoft, LeetCode 1047 Remove All Adjacent Duplicates In String is a problem you cannot skip. It is one of the most elegant examples of the stack simulation pattern and appears frequently as a warmup or follow-up question in technical rounds.

In this article we will cover everything you need — a plain English explanation, a real life analogy, three Java approaches with dry runs, complexity analysis, common mistakes, FAQs, and similar problems to practice next.

Here is the problem link: LeetCode 1047

What Is the Problem Really Asking?

You are given a string. Keep scanning it, and whenever you find two of the same letter sitting next to each other, remove both of them. After removing, the letters around them might now become adjacent and form a new pair — so you keep doing this until no more adjacent duplicates exist.

Example walkthrough for "abbaca":

"abbaca" → bb are adjacent duplicates → remove → "aaca"
"aaca" → aa are adjacent duplicates → remove → "ca"
"ca" → no adjacent duplicates → done!

✅ Output: "ca"

Real Life Analogy — Think of Popping Bubbles

Imagine a row of colored bubbles. Whenever two bubbles of the same color are next to each other, they pop and disappear. After they pop, the bubbles on either side might now touch each other — and if they are the same color, they pop too! You keep going until no two same-colored bubbles are touching.

That chain reaction is exactly what this problem simulates. And the best tool to handle that chain reaction? A stack.

Approach 1: Brute Force (Beginner Friendly)

The Idea

Scan the string repeatedly. Every time you find two adjacent equal characters, remove them.
Keep doing this until a full pass finds nothing to remove.

public String removeDuplicates(String s) {
    StringBuilder sb = new StringBuilder(s);
    boolean found = true;
    while (found) {
        found = false;
        for (int i = 0; i < sb.length() - 1; i++) {
            if (sb.charAt(i) == sb.charAt(i + 1)) {
                sb.deleteCharAt(i);
                sb.deleteCharAt(i);
                found = true;
                break;
            }
        }
    }
    return sb.toString();
}

This is easy to understand but very slow. Every time a pair is found, you restart the entire scan. With n up to 100,000, this will get Time Limit Exceeded on LeetCode. Use it only to build intuition.

Time Complexity: O(n²) — repeated passes over the string
Space Complexity: O(n) — StringBuilder storage

Approach 2: Stack Based Solution (Classic Interview Approach)

The Idea

A stack is perfect here because of one key observation — when you remove a pair, the character that was before the pair is now adjacent to the character after the pair. That is a Last In, First Out situation, which is exactly what a stack handles naturally.

Algorithm:
- If the current character matches the top of the stack → pop (they cancel each other)
- Otherwise → push the current character onto the stack
- At the end, the stack contains your final answer

public String removeDuplicates(String s) {
    Stack<Character> st = new Stack<>();
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (!st.empty() && c == st.peek()) {
            st.pop(); // adjacent duplicate found, cancel both
        } else {
            st.push(c);
        }
    }
    while (!st.empty()) {
        sb.append(st.pop());
    }
    return sb.reverse().toString();
}

Dry Run — "abbaca"

We go character by character and check against the top of the stack:

a → stack empty, push → stack: [a]
b → top is a, not equal, push → stack: [a, b]
b → top is b, equal! pop → stack: [a]
a → top is a, equal! pop → stack: []
c → stack empty, push → stack: [c]
a → top is c, not equal, push → stack: [c, a]

Stack remaining: [c, a] → reverse → ✅ "ca"

Notice how after removing bb, the two a's automatically become adjacent and get caught — the stack handles this chain reaction naturally, without any extra logic!

Time Complexity: O(n) — single pass through the string
Space Complexity: O(n) — stack holds up to n characters

Approach 3: StringBuilder as Stack (Optimal Solution) ✅

The Idea

This is the best approach. Instead of using a separate Stack<Character>, we use StringBuilder itself as a stack:

- sb.append(c) acts as push
- sb.deleteCharAt(sb.length() - 1) acts as pop
- sb.charAt(sb.length() - 1) acts as peek

No extra data structure, no boxing of char into Character objects, and no reversal needed at the end. Clean, fast, and minimal.

public String removeDuplicates(String s) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
        char c = s.charAt(i);
        if (sb.length() != 0 && c == sb.charAt(sb.length() - 1)) {
            sb.deleteCharAt(sb.length() - 1); // adjacent duplicate, remove both
        } else {
            sb.append(c);
        }
    }
    return sb.toString();
}

Dry Run — "azxxzy"

a → sb empty, append → "a"
z → last char is a, not equal, append → "az"
x → last char is z, not equal, append → "azx"
x → last char is x, equal! delete → "az"
z → last char is z, equal! delete → "a"
y → last char is a, not equal, append → "ay"

✅ Final Answer: "ay"

Again, notice the chain reaction — after xx was removed, z and z became adjacent and got removed too. The StringBuilder handles this perfectly in a single pass!

Time Complexity: O(n) — one pass, every character processed exactly once
Space Complexity: O(n) — StringBuilder storage

Why StringBuilder Beats Stack in Java

When you use Stack<Character> in Java, every char primitive gets auto-boxed into a Character object. That means an extra memory allocation for every single character. With StringBuilder, you work directly on the underlying char array — faster and leaner.
Plus you skip the reversal step entirely.

For an interview, the Stack approach is great for explaining your thought process clearly. But for the final submitted solution, StringBuilder is the way to go.

Common Mistakes to Avoid

1. Not checking sb.length() != 0 before peeking
If the StringBuilder is empty and you call sb.charAt(sb.length() - 1), you will get a StringIndexOutOfBoundsException. Always guard this check — even if the problem guarantees valid input, it shows clean coding habits.

2. Thinking you need multiple passes
Many beginners think you need to scan the string multiple times because of chain reactions. The stack handles chain reactions automatically in a single pass. Trust the process!

3. Forgetting to reverse when using a Stack
Since a stack gives you characters in reverse order when you pop them, you must call .reverse() at the end. With StringBuilder you do not need this.

How This Fits Into the Stack Simulation Pattern

By now you might be noticing a theme across multiple problems:

- LeetCode 3174 Clear Digits — a digit acts as a backspace, deleting the closest non-digit to its left
- LeetCode 2390 Removing Stars — a star acts as a backspace, deleting the closest non-star to its left
- LeetCode 1047 Remove Adjacent Duplicates — a character cancels itself if it matches the top of the stack

All three use the exact same StringBuilder-as-stack pattern. The only difference is the condition that triggers a deletion. This is why pattern recognition is the real skill — once you internalize this pattern, you can solve a whole family of problems in minutes.

FAQs — People Also Ask

Q1. What is the best approach for LeetCode 1047 in Java?
The StringBuilder approach is the best. It runs in O(n) time, uses O(n) space, requires no extra data structure, and avoids the reversal step needed with a Stack.

Q2. Why does a stack work for removing adjacent duplicates?
Because whenever you remove a pair, the characters around them become the new neighbors.
A stack naturally keeps track of the most recently seen character, so it catches these chain reactions without any extra logic.

Q3. What is the time complexity of LeetCode 1047?
The optimal solution runs in O(n) time and O(n) space, where n is the length of the input string.

Q4. Is LeetCode 1047 asked in coding interviews?
Yes, it is commonly asked as a warmup problem or follow-up at companies like Google, Amazon, and Adobe. It tests your understanding of stack-based string manipulation.

Q5. What is the difference between LeetCode 1047 and LeetCode 1209?
LeetCode 1047 removes pairs of adjacent duplicates. LeetCode 1209 is the harder version — it removes groups of k adjacent duplicates, requiring you to store counts alongside characters in the stack.

Similar LeetCode Problems to Practice Next

- 2390. Removing Stars From a String — Medium — star as backspace
- 3174. Clear Digits — Easy — digit as backspace
- 844. Backspace String Compare — Easy — compare two strings after backspaces
- 1209. Remove All Adjacent Duplicates in String II — Medium — harder version with k duplicates
- 735. Asteroid Collision — Medium — stack simulation with collision logic

Conclusion

LeetCode 1047 Remove All Adjacent Duplicates In String is a beautiful problem that teaches you one of the most powerful and reusable patterns in DSA — stack simulation. The moment you spot that a removal can cause a chain reaction of more removals, you know a stack is your best friend.

The StringBuilder solution is clean, optimal, and interview-ready. Master it, understand why it works, and you will be able to tackle the entire family of stack simulation problems with confidence.

Found this helpful? Share it with friends preparing for coding interviews.
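As a follow-up to the FAQ above, here is a hedged sketch of the LeetCode 1209 idea: counts stored alongside characters on the stack. The class and method names are my own, and this is an illustration of the technique rather than this article's solution:

```java
import java.util.*;

public class RemoveKDuplicates {
    // Sketch for LeetCode 1209: keep (character, run length) pairs on a stack;
    // when a run reaches k, drop the whole run.
    public static String removeDuplicates(String s, int k) {
        Deque<int[]> stack = new ArrayDeque<>(); // each entry: {char, count}
        for (char c : s.toCharArray()) {
            if (!stack.isEmpty() && stack.peek()[0] == c) {
                if (++stack.peek()[1] == k) stack.pop(); // run of k — remove it
            } else {
                stack.push(new int[]{c, 1});
            }
        }
        // Rebuild the string: iteration goes top -> bottom, so reverse at the end
        StringBuilder sb = new StringBuilder();
        for (int[] entry : stack) {
            for (int i = 0; i < entry[1]; i++) sb.append((char) entry[0]);
        }
        return sb.reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(removeDuplicates("deeedbbcccbdaa", 3)); // "aa"
    }
}
```

For "deeedbbcccbdaa" with k = 3, the runs eee, ccc, bbb, and ddd collapse in a chain reaction, leaving "aa" — the same single-pass chain-reaction behavior as 1047, just with counts instead of single characters.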

LeetCode, Java, Stack, String, Easy