
Understanding HashMap and Hashing Techniques: A Complete Guide


What is HashMap?

HashMap is a non-linear data structure in Java's collection framework, alongside structures like Trees and Graphs. What makes it special? It uses a clever technique called hashing to store and retrieve data at blazing speeds.

Think of hashing as a smart address system. It converts large keys (like long strings or big numbers) into smaller values that work as array indices. This simple trick transforms your data lookup from slow O(n) or O(log n) operations into lightning-fast O(1) access times!

Understanding Hashing Mappings

Before diving deeper, let's understand the four types of hashing mappings:

1. One-to-one mapping: each key maps to exactly one index
2. Many-to-one mapping: multiple keys can map to the same index
3. One-to-many mapping: one key maps to multiple indices
4. Many-to-many mapping: multiple keys map to multiple indices

HashMap primarily uses one-to-one and many-to-one mapping strategies to achieve its performance goals.

The Problem: Why Not Just Use Arrays?

Let's say you want to store 7 elements with keys 1, 5, 23, 57, 234, 678, and 1000.

Using direct indexing (where key = array index), you'd need an array of size 1001 (indices 0 through 1000) just to store 7 items! That leaves 994 empty slots wasting precious memory. Imagine having a parking lot with a thousand spaces for just 7 cars: totally inefficient!

This is the exact problem hash functions solve.

Hash Functions: The Solution

A hash function takes your key and calculates which array index to use. The beauty? You can now store those same 7 elements in an array of just size 10!

The most common hash function uses the modulus operator:

hash(key) = key mod array_size

Example with array size = 10:

- Key 57 → index 57 mod 10 = 7
- Key 234 → index 234 mod 10 = 4
- Key 1000 → index 1000 mod 10 = 0

Now you only need an array of size 10 instead of 1000.
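As a minimal sketch of the modulus hash described above (the class name ModHashDemo is mine, not from the article):

```java
// Minimal sketch: map large keys into a small array using the modulus hash.
// Table size 10 and the keys below mirror the article's example.
public class ModHashDemo {
    static int hash(int key, int size) {
        return key % size; // index = key mod size
    }

    public static void main(String[] args) {
        int size = 10;
        int[] keys = {1, 5, 23, 57, 234, 678, 1000};
        for (int key : keys) {
            System.out.println(key + " -> index " + hash(key, size));
        }
        // e.g. 57 -> index 7, 234 -> index 4, 1000 -> index 0
    }
}
```

All seven keys land inside a 10-slot array, which is exactly the memory saving the hash function buys you.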
Problem solved!

Three Popular Hash Function Types

1. Modulus method: the most widely used technique. Divide the key by the array size (or a prime number) and use the remainder as the index: index = key mod size.
2. Mid square method: square the key value, then extract the middle digits to form your hash value. Example: key = 57 → 57² = 3249 → extract the middle digits "24".
3. Folding hashing: divide the key into equal parts, add them together, then apply modulus. Example: key = 123456 → split into 12, 34, 56 → sum 12 + 34 + 56 = 102 → 102 mod 10 = 2.

The Collision Problem

Here's the catch: sometimes two different keys produce the same index. This is called a collision.

Example:
- Key 27 → 27 mod 10 = 7
- Key 57 → 57 mod 10 = 7

Both keys want index 7! We need smart strategies to handle this.

Collision Resolution: Two Main Techniques

Technique 1: Separate Chaining (also called open hashing)

Mapping type: many-to-one.

When a collision happens, create a linked list at that index and store all colliding values in sorted order.

Example: keys 22 and 32 both map to index 2:

Index 2: [22] → [32] → null

Advantage: simple and works well in practice.
Drawback: if too many collisions occur, the linked list becomes long and searching degrades to O(n). However, Java's collection framework uses highly optimized hash functions that achieve amortized O(1) time complexity.

Technique 2: Open Addressing (also called closed hashing)

Mapping type: one-to-many.

Instead of creating a list, find another empty slot in the array. There are three approaches.

Linear Probing

When a collision occurs, check the next index. If it's occupied, keep checking the next one until you find an empty slot.

Hash function: h(x) = x mod size
On collision: h(x, i) = (h(x) + i) mod size, where i = 0, 1, 2, 3, …

Example: keys 22 and 32 both map to index 2:
- Store 22 at index 2
- Try 32 at index 2 → occupied
- Check index 3 → empty, store 32 there

Problems:
- Clustering: keys bunch together in adjacent indices, slowing down searches
- Deleted slot problem: when you delete a key, it creates an empty slot that breaks the search chain

The deleted slot problem explained:
- Keys 22 and 32 are at indices 2 and 3
- Delete key 22 from index 2
- Now when searching for key 32, the algorithm reaches index 2 (empty), thinks key 32 doesn't exist, and stops searching!

Solution: mark deleted slots with a special marker (like -1) or perform rehashing after deletions.

Quadratic Probing

Same concept as linear probing, but instead of checking consecutive slots, jump in quadratic increments: 1, 4, 9, 16, 25, …

Hash function: h(x) = x mod size
On collision: h(x, i) = (h(x) + i²) mod size, where i = 0, 1, 2, 3, …

Advantage: significantly reduces clustering.
Drawback: still suffers from the deleted slot problem.

Double Hashing

The most sophisticated approach! Uses two different hash functions to create a unique probe sequence for each key.

First hash function: h1(x) = x mod size
Second hash function: h2(x) = prime − (x mod prime)
On collision: h(x, i) = (h1(x) + i × h2(x)) mod size, where i = 0, 1, 2, 3, …

Advantage: effectively avoids clustering and provides the best performance among the probing techniques.

Load Factor: The Performance Indicator

The load factor tells you how full your hash table is:

Load factor = (number of elements) / (array size)

Performance guidelines:
- Load factor ≤ 0.5 → good performance, fast operations
- Load factor > 0.7 → poor performance, time to rehash

Examples:
- Bad load factor: array size = 10, elements = 8 → 8/10 = 0.8. Action: rehash the array (typically double the size).
- Good load factor: array size = 200, elements = 100 → 100/200 = 0.5. Action: no changes needed, performing optimally.

What is Rehashing?

When the load factor exceeds the threshold (usually 0.7), the hash table is resized (typically doubled in size). All existing elements are then rehashed and redistributed into the new, larger array. This maintains optimal performance as your data grows.

Performance Summary

HashMap delivers exceptional average-case performance:
- Insertion: O(1)
- Deletion: O(1)
- Search: O(1)

However, in the worst case (many collisions with a poor hash function), operations can degrade to O(n).

The keys to maintaining O(1) performance:
- Use a good hash function
- Choose an appropriate collision resolution strategy
- Monitor and maintain a healthy load factor
- Rehash when necessary

Conclusion

HashMap is one of the most powerful and frequently used data structures in programming. By understanding hashing techniques, collision resolution strategies, and load factors, you can write more efficient code and make better design decisions.

Whether you're building a cache, implementing a frequency counter, or solving complex algorithm problems, HashMap is your go-to tool for fast data access!

hashmap, data-structures, hashing, java, algorithms, collision-resolution, time-complexity
LeetCode 290: Word Pattern – Multiple Ways to Verify Bijection Between Pattern Characters and Words


Try the Problem

You can practice the problem here: https://leetcode.com/problems/word-pattern/

Problem Description

You are given:
- A string pattern
- A string s

The string s consists of words separated by spaces. Your task is to determine whether s follows the same pattern defined by the string pattern. To follow the pattern, there must be a bijection between pattern characters and words.

Understanding the Meaning of Bijection

A bijection means a one-to-one relationship between two sets. For this problem it means:
- Every character in the pattern maps to exactly one word.
- Every word maps to exactly one pattern character.
- Two characters cannot map to the same word.
- Two words cannot map to the same character.

This ensures the mapping is unique in both directions.

Example Walkthrough

Example 1
Input: pattern = "abba", s = "dog cat cat dog"
Output: true
Explanation: the mapping is a → dog, b → cat. The pattern a b b a lines up with the words dog cat cat dog. The mapping is perfectly consistent.

Example 2
Input: pattern = "abba", s = "dog cat cat fish"
Output: false
Explanation: the last word "fish" breaks the pattern because "a" was already mapped to "dog".

Example 3
Input: pattern = "aaaa", s = "dog cat cat dog"
Output: false
Explanation: the pattern requires a → the same word every time, but the words differ.

Constraints
- 1 <= pattern.length <= 300; pattern contains lowercase English letters
- 1 <= s.length <= 3000; s contains lowercase letters and spaces
- Words are separated by a single space; no leading or trailing spaces

Core Observation

Before applying any algorithm, we should verify one simple condition: if the number of pattern characters does not match the number of words, the pattern cannot match.

Example: pattern = "abba", s = "dog cat dog". Pattern length = 4, words = 3. This automatically returns false.

Thinking About Possible Approaches

While solving this problem, several strategies may come to mind:
1. Use one HashMap to store the pattern → word mapping and check duplicates manually.
2. Use two HashMaps to enforce the bijection from both directions.
3. Track first-occurrence indices of pattern characters and words.
4. Compare mapping patterns using arrays or maps.

Let's explore these approaches one by one.

Approach 1: Single HashMap (Your Solution)

Idea
We store the mapping pattern character → word. While iterating:
- If the character already exists in the map, check that it maps to the same word.
- If it does not exist, ensure that no other character already maps to that word.

This ensures bijection.

Java Implementation

```java
class Solution {
    public boolean wordPattern(String pattern, String s) {
        String[] words = s.split(" ");
        // Length mismatch means the mapping is impossible
        if (pattern.length() != words.length) return false;
        HashMap<Character, String> mp = new HashMap<>();
        for (int i = 0; i < words.length; i++) {
            char ch = pattern.charAt(i);
            if (mp.containsKey(ch)) {
                if (!mp.get(ch).equals(words[i])) {
                    return false;
                }
            } else {
                // Prevent two characters mapping to the same word
                if (mp.containsValue(words[i])) {
                    return false;
                }
            }
            mp.put(ch, words[i]);
        }
        return true;
    }
}
```

Time Complexity: O(n²), because containsValue() can take O(n) time in the worst case.
Space Complexity: O(n)

Approach 2: Two HashMaps (Cleaner Bijection Enforcement)

Idea
Instead of checking containsValue, we maintain two maps:
- pattern → word
- word → pattern

This enforces the bijection naturally.

Java Implementation

```java
class Solution {
    public boolean wordPattern(String pattern, String s) {
        String[] words = s.split(" ");
        if (pattern.length() != words.length) return false;
        HashMap<Character, String> map1 = new HashMap<>();
        HashMap<String, Character> map2 = new HashMap<>();
        for (int i = 0; i < pattern.length(); i++) {
            char ch = pattern.charAt(i);
            String word = words[i];
            if (map1.containsKey(ch) && !map1.get(ch).equals(word)) return false;
            if (map2.containsKey(word) && map2.get(word) != ch) return false;
            map1.put(ch, word);
            map2.put(word, ch);
        }
        return true;
    }
}
```

Time Complexity: O(n), because both maps allow constant-time lookups.
Space Complexity: O(n)

Approach 3: Index Mapping Technique (Elegant Trick)

Idea
Another clever technique is to compare occurrence indices: a character and a word are consistent only if they were last seen at the same position.

Example: pattern = abba, words = dog cat cat dog
Pattern index sequence: a → 0, b → 1, b → 1, a → 0
Words index sequence: dog → 0, cat → 1, cat → 1, dog → 0

If the index sequences match, the pattern holds.

Java Implementation

```java
class Solution {
    public boolean wordPattern(String pattern, String s) {
        String[] words = s.split(" ");
        if (pattern.length() != words.length) return false;
        HashMap<Character, Integer> map1 = new HashMap<>();
        HashMap<String, Integer> map2 = new HashMap<>();
        for (int i = 0; i < pattern.length(); i++) {
            char ch = pattern.charAt(i);
            String word = words[i];
            // Compare the index at which each was last seen (-1 if never seen)
            if (!map1.getOrDefault(ch, -1)
                     .equals(map2.getOrDefault(word, -1))) {
                return false;
            }
            map1.put(ch, i);
            map2.put(word, i);
        }
        return true;
    }
}
```

Time Complexity: O(n)
Space Complexity: O(n)

Comparison of Approaches

| Approach | Time Complexity | Space Complexity | Notes |
| --- | --- | --- | --- |
| Single HashMap | O(n²) | O(n) | Simple but slower |
| Two HashMaps | O(n) | O(n) | Cleaner bijection enforcement |
| Index Mapping | O(n) | O(n) | Elegant and interview-friendly |

Key Interview Insight

Problems like this test your understanding of bijection mapping. Similar problems include: Isomorphic Strings, Word Pattern II, and substitution cipher problems. Understanding how to enforce one-to-one mappings is a powerful technique.

Conclusion

The Word Pattern problem is an excellent example of how HashMaps can enforce relationships between two datasets. While a simple HashMap solution works, more optimized methods like two-way mapping or index comparison provide cleaner and more efficient implementations. Mastering these techniques will help you solve many pattern-matching and mapping problems commonly asked in coding interviews.
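As a quick sanity check of the three examples, here is a standalone sketch of the two-HashMap approach (the wrapper class name WordPatternDemo is mine):

```java
import java.util.HashMap;

// Standalone sketch of the two-HashMap bijection check.
public class WordPatternDemo {
    static boolean wordPattern(String pattern, String s) {
        String[] words = s.split(" ");
        if (pattern.length() != words.length) return false;
        HashMap<Character, String> map1 = new HashMap<>(); // char -> word
        HashMap<String, Character> map2 = new HashMap<>(); // word -> char
        for (int i = 0; i < pattern.length(); i++) {
            char ch = pattern.charAt(i);
            String word = words[i];
            if (map1.containsKey(ch) && !map1.get(ch).equals(word)) return false;
            if (map2.containsKey(word) && map2.get(word) != ch) return false;
            map1.put(ch, word);
            map2.put(word, ch);
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(wordPattern("abba", "dog cat cat dog"));  // true
        System.out.println(wordPattern("abba", "dog cat cat fish")); // false
        System.out.println(wordPattern("aaaa", "dog cat cat dog"));  // false
    }
}
```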

HashMap, String Manipulation, Bijection Mapping, Two HashMaps, Index Mapping, LeetCode Easy
LeetCode 3761 Minimum Absolute Distance Between Mirror Pairs | Java HashMap Solution


Introduction

Some problems look simple at first but hide a clever trick inside. LeetCode 3761 – Minimum Absolute Distance Between Mirror Pairs is one such problem. It combines:
- Number manipulation (digit reversal)
- Hashing
- Efficient searching

If approached naively, this problem can easily lead to O(n²) time complexity, which is not feasible for large inputs.

In this article, we will walk through:
- Problem intuition
- The naive approach (and why it fails)
- The optimized HashMap solution
- A step-by-step explanation
- Clean Java code with comments

🔗 Problem Link

LeetCode: Minimum Absolute Distance Between Mirror Pairs

To gain a deeper understanding of the problem, it is highly recommended that you review the similar problem Closest Equal Element Queries, covered in its own article. Both follow a nearly identical pattern, and studying that initial example will provide valuable context for the current task.

Problem Statement

You are given an integer array nums. A mirror pair (i, j) satisfies:
- 0 ≤ i < j < nums.length
- reverse(nums[i]) == nums[j]

👉 Your task is to find the minimum absolute distance between such pairs. If no mirror pair exists, return -1.

Examples

Example 1
Input: nums = [12, 21, 45, 33, 54]
Output: 1
Explanation:
- (0,1) → reverse(12) = 21 → distance = 1
- (2,4) → reverse(45) = 54 → distance = 2
✔ Minimum = 1

Example 2
Input: nums = [120, 21]
Output: 1

Example 3
Input: nums = [21, 120]
Output: -1

Key Insight

The core idea: instead of checking every pair, store reversed values and match on the fly.

❌ Naive Approach (Brute Force)

Idea:
- Check all pairs (i, j)
- Reverse nums[i]
- Compare with nums[j]

Complexity: Time O(n²), Space O(1).

Problem: with n ≤ 100000, this approach will definitely cause TLE.

✅ Optimized Approach (HashMap)

Intuition: while iterating through the array:
- Check whether the current number was already recorded as some earlier number's reversed value
- If yes → we found a mirror pair
- Reverse the current number and record it for future matches

Key trick: instead of storing original numbers, 👉 store reversed values as keys. This allows instant lookup.

Java Code (With Detailed Comments)

```java
import java.util.*;

class Solution {
    // Function to reverse the digits of a number
    public int reverse(int m) {
        int rev = 0;
        while (m != 0) {
            int dig = m % 10;      // extract last digit
            m = m / 10;            // remove last digit
            rev = rev * 10 + dig;  // build reversed number
        }
        return rev;
    }

    public int minMirrorPairDistance(int[] nums) {
        // Map to store reversed values and their indices
        HashMap<Integer, Integer> mp = new HashMap<>();
        int min = Integer.MAX_VALUE;
        for (int i = 0; i < nums.length; i++) {
            // Check if the current number exists in the map,
            // meaning some previous number had a reverse equal to it
            if (mp.containsKey(nums[i])) {
                int prevIndex = mp.get(nums[i]);
                min = Math.min(min, Math.abs(i - prevIndex));
            }
            // Reverse the current number
            int revVal = reverse(nums[i]);
            // Store the reversed value with its index
            mp.put(revVal, i);
        }
        // If no pair was found, return -1
        return min == Integer.MAX_VALUE ? -1 : min;
    }
}
```

Step-by-Step Dry Run

Input: nums = [12, 21, 45, 33, 54]

| Index | Value | Reverse | Map Check | Action |
| --- | --- | --- | --- | --- |
| 0 | 12 | 21 | not found | store (21 → 0) |
| 1 | 21 | 12 | found | distance = 1 |
| 2 | 45 | 54 | not found | store (54 → 2) |
| 3 | 33 | 33 | not found | store (33 → 3) |
| 4 | 54 | 45 | found | distance = 2 |

👉 Minimum = 1

Complexity Analysis

Time Complexity:
- Reversing a number → O(digits) ≈ O(log n)
- Loop → O(n)
👉 Overall: O(n)

Space Complexity: the HashMap stores at most n elements 👉 O(n)

Why This Approach Works
- Avoids unnecessary pair comparisons
- Uses hashing for constant-time lookup
- Processes the array in a single pass

Key Takeaways
- Always think of hashing when matching conditions exist
- Reversing numbers can convert the problem into a lookup problem
- Avoid brute force when constraints are large
- This is a classic "store & check" pattern

Common Interview Pattern

This problem is similar to: Two Sum (hashing), reverse pairs, and matching transformations.

Conclusion

The Minimum Absolute Distance Between Mirror Pairs problem is a great example of how a simple optimization (using a HashMap) can reduce complexity from O(n²) to O(n). Understanding this pattern will help you solve many similar problems involving transformations, matching conditions, and efficient lookups.

Frequently Asked Questions (FAQs)

1. Why store the reversed value instead of the original? Because we want to quickly check if a number matches the reverse of a previous number.
2. What if multiple identical reversed values exist? The map stores the latest index, ensuring the minimum distance is considered.
3. Can this be solved without a HashMap? Yes, but it will result in an inefficient O(n²) solution.
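The three examples can be verified with a standalone sketch of the reversed-value approach (the wrapper class name MirrorPairDemo is mine):

```java
import java.util.HashMap;

// Standalone sketch: store each number's reversed value keyed by index,
// and check incoming numbers against previously stored reversals.
public class MirrorPairDemo {
    static int reverse(int m) {
        int rev = 0;
        while (m != 0) {
            rev = rev * 10 + m % 10; // append last digit
            m /= 10;                 // drop last digit
        }
        return rev;
    }

    static int minMirrorPairDistance(int[] nums) {
        HashMap<Integer, Integer> seenReversed = new HashMap<>();
        int min = Integer.MAX_VALUE;
        for (int i = 0; i < nums.length; i++) {
            if (seenReversed.containsKey(nums[i])) {
                min = Math.min(min, Math.abs(i - seenReversed.get(nums[i])));
            }
            seenReversed.put(reverse(nums[i]), i);
        }
        return min == Integer.MAX_VALUE ? -1 : min;
    }

    public static void main(String[] args) {
        System.out.println(minMirrorPairDistance(new int[]{12, 21, 45, 33, 54})); // 1
        System.out.println(minMirrorPairDistance(new int[]{120, 21}));            // 1
        System.out.println(minMirrorPairDistance(new int[]{21, 120}));            // -1
    }
}
```

Note how the [120, 21] vs [21, 120] pair of examples confirms the one-directional check: reverse(120) = 21, but reverse(21) = 12 ≠ 120.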

LeetCode, Array, Java, Medium, HashMap
Reverse Vowels of a String – From Extra Space Approach to Two Pointer Optimization (LeetCode 345)


🔗 Problem Link

LeetCode 345 – Reverse Vowels of a String 👉 https://leetcode.com/problems/reverse-vowels-of-a-string/

Introduction

This problem is very similar to Reverse Only Letters, but with a small twist: instead of reversing all letters, we only reverse the vowels.

At first, we might think of extracting vowels, reversing them, and putting them back. That works, but it is not the most optimal approach.

In this article, we'll:
- Understand the brute-force style approach (your solution)
- Analyze its time complexity
- Optimize it using the Two Pointer pattern
- Compare both approaches

📌 Problem Understanding

You are given a string s. You must:
- Reverse only the vowels
- Keep all other characters in the same position

Vowels include: a, e, i, o, u and A, E, I, O, U.

Example 1
Input: "IceCreAm" → Output: "AceCreIm"
Vowels: ['I','e','e','A'], reversed: ['A','e','e','I']

Example 2
Input: "leetcode" → Output: "leotcede"

🧠 Your First Approach – Extract, Reverse, Replace

Your idea:
1. Extract all vowels into a string.
2. Store their indices.
3. Reverse the vowel string.
4. Replace vowels at the stored indices.

Let's look at your code.

💻 Your Code (Extract & Replace Method)

```java
class Solution {
    public String reverseVowels(String s) {
        String vow = "";
        List<Integer> lis = new ArrayList<>();
        HashMap<Integer, Character> mp = new HashMap<>();
        for (int i = 0; i < s.length(); i++) {
            if ((s.charAt(i) == 'a') || (s.charAt(i) == 'e') || (s.charAt(i) == 'i')
                    || (s.charAt(i) == 'o') || (s.charAt(i) == 'u')
                    || (s.charAt(i) == 'A') || (s.charAt(i) == 'E') || (s.charAt(i) == 'I')
                    || (s.charAt(i) == 'O') || (s.charAt(i) == 'U')) {
                vow += s.charAt(i);
                lis.add(i);
            }
        }
        String so = "";
        for (int i = vow.length() - 1; i >= 0; i--) {
            so += vow.charAt(i);
        }
        for (int i = 0; i < lis.size(); i++) {
            mp.put(lis.get(i), so.charAt(i));
        }
        String ans = "";
        for (int i = 0; i < s.length(); i++) {
            if (mp.containsKey(i)) {
                ans += mp.get(i);
            } else {
                ans += s.charAt(i);
            }
        }
        return ans;
    }
}
```

🔍 How This Works

Step 1 – Extract vowels: store the vowel characters in vow and their indices in lis.
Step 2 – Reverse the vowel string: create the new reversed string so.
Step 3 – Map indices to reversed vowels: use a HashMap to store index → reversed vowel.
Step 4 – Build the final string: traverse the original string; if the index is in the map, use the reversed vowel, else use the original character.

⚠️ Problem with This Approach

Although correct, it has inefficiencies:
- String concatenation (+=) → O(n²) in the worst case
- Extra space used: the vowel string, the list of indices, the HashMap, and the final answer string

Time Complexity: O(n²) (due to string concatenation)
Space Complexity: O(n)

We can do better.

🚀 Optimized Approach – Two Pointers (Best Solution)

Instead of extracting vowels separately, we can:
- Convert the string into a char array
- Use two pointers
- Swap vowels directly

This avoids extra structures.

💻 Optimized Two Pointer Solution

```java
class Solution {
    public String reverseVowels(String s) {
        int i = 0, j = s.length() - 1;
        char[] arr = s.toCharArray();
        while (i < j) {
            if (!isVowel(arr[i])) {
                i++;
            } else if (!isVowel(arr[j])) {
                j--;
            } else {
                char temp = arr[i];
                arr[i] = arr[j];
                arr[j] = temp;
                i++;
                j--;
            }
        }
        return new String(arr);
    }

    private boolean isVowel(char c) {
        return c == 'a' || c == 'e' || c == 'i' || c == 'o' || c == 'u'
            || c == 'A' || c == 'E' || c == 'I' || c == 'O' || c == 'U';
    }
}
```

🎯 Why This Works

We:
- Move the left pointer until a vowel is found
- Move the right pointer until a vowel is found
- Swap
- Continue

No extra storage needed.

⏱ Complexity Comparison

Your approach: Time O(n²) (string concatenation), Space O(n).
Two pointer approach: Time O(n), Space O(n) (char array).

Much cleaner and faster.

🔥 Key Learning

This problem reinforces:
- The two pointer pattern
- In-place modification
- Avoiding unnecessary extra space
- Recognizing optimization opportunities

Whenever you see "reverse something but keep other elements fixed", think: 👉 Two Pointers.

🏁 Final Thoughts

Your approach shows strong logical thinking: Extract → Reverse → Replace. That's a valid way to solve it. But the optimized two-pointer approach is more interview-friendly. If you master this pattern, you can easily solve: Reverse Only Letters, Reverse String, Valid Palindrome, Remove Duplicates from Sorted Array.
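As a quick sanity check on the article's two examples, here is a standalone sketch of the two-pointer version (the class name ReverseVowelsDemo is mine):

```java
// Standalone sketch of the two-pointer solution.
public class ReverseVowelsDemo {
    static boolean isVowel(char c) {
        return "aeiouAEIOU".indexOf(c) >= 0;
    }

    static String reverseVowels(String s) {
        char[] arr = s.toCharArray();
        int i = 0, j = arr.length - 1;
        while (i < j) {
            if (!isVowel(arr[i])) i++;        // advance left pointer to a vowel
            else if (!isVowel(arr[j])) j--;   // retreat right pointer to a vowel
            else {
                char temp = arr[i];           // swap the two vowels
                arr[i] = arr[j];
                arr[j] = temp;
                i++;
                j--;
            }
        }
        return new String(arr);
    }

    public static void main(String[] args) {
        System.out.println(reverseVowels("IceCreAm")); // AceCreIm
        System.out.println(reverseVowels("leetcode")); // leotcede
    }
}
```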

Two Pointers, String Manipulation, HashMap, LeetCode, Easy
LeetCode 3488 — Closest Equal Element Queries: A Complete Walkthrough from Brute Force to Optimal


If you have been grinding LeetCode lately, you have probably run into problems where your first clean-looking solution times out and forces you to rethink from scratch. LeetCode 3488 is exactly that kind of problem. This article walks through the complete thought process — from the naive approach that got me TLE, to the intuition shift, to the final optimized solution in Java.

You can find the original problem here: LeetCode 3488 — Closest Equal Element Queries.

Understanding the Problem

You are given a circular array nums and an array of queries. For each query queries[i], you must find the minimum distance between the element at index queries[i] and any other index j such that nums[j] == nums[queries[i]]. If no such other index exists, the answer is -1.

The critical detail here is the word circular. The array wraps around, which means the distance between two indices i and j in an array of length n is not simply |i - j|. It is:

min(|i - j|, n - |i - j|)

You can travel either clockwise or counterclockwise, and you take whichever path is shorter.

Breaking Down the Examples

Example 1
nums = [1, 3, 1, 4, 1, 3, 2], queries = [0, 3, 5]
- For query index 0, the value is 1. Other indices holding 1 are 2 and 4. Circular distances are min(2, 5) = 2 and min(4, 3) = 3. The minimum is 2.
- For query index 3, the value is 4. It appears nowhere else in the array. Answer is -1.
- For query index 5, the value is 3. The other 3 sits at index 1. Circular distance is min(4, 3) = 3. Answer is 3.
Output: [2, -1, 3]

Example 2
nums = [1, 2, 3, 4], queries = [0, 1, 2, 3]
Every element is unique. Every query returns -1.
Output: [-1, -1, -1, -1]

First Attempt — Brute Force

My first instinct was straightforward. For each query, scan the entire array, collect every index that matches the queried value, compute the circular distance to each, and return the minimum.
Clean logic, easy to reason about, and dead simple to implement.

```java
// (i starts at 0 and lis is the result list, declared earlier)
while (i != queries.length) {
    int max = Integer.MAX_VALUE;
    for (int j = 0; j < nums.length; j++) {
        int target = nums[queries[i]];
        if (nums[j] == target && j != queries[i]) {
            // Linear distance between the two indices
            int right = Math.abs(j - queries[i]);
            // Distance going the other direction around the ring
            int left = nums.length - right;
            // True circular distance is the shorter of the two
            int dist = Math.min(right, left);
            max = Math.min(max, dist);
        }
    }
    lis.add(max == Integer.MAX_VALUE ? -1 : max);
    i++;
}
```

This got TLE immediately, and once you look at the constraints it is obvious why. Both nums.length and queries.length can be up to 10^5. For every query you are scanning every element, giving you O(n × q) time — which in the worst case is 10 billion operations. No judge is going to wait for that.

Rethinking the Approach — Where Is the Waste?

After the TLE, the question I asked myself was: what work is being repeated unnecessarily?

The answer was obvious in hindsight. Every time a query asks about a value like 3, the brute force scans the entire array again looking for every index that holds 3. If ten different queries all ask about value 3, you are doing that scan ten times. You are finding the same indices over and over.

The fix is to do that work exactly once, before any query is processed. You precompute a map from each value to all the indices where it appears. Then for every query you simply look up the relevant list and work within it.

That observation reduces the precomputation to O(n) — one pass through the array. The question then becomes: once you have that sorted list of indices for a given value, how do you find the closest one to your query index efficiently?

The Key Insight — You Only Need Two Neighbours

Here is the insight that makes this problem elegant. The index list for any value is sorted in ascending order because you build it by iterating left to right through the array.
If your query index sits at position mid inside that sorted list, then by definition every index to the left of mid - 1 is farther away than arr[mid - 1], and every index to the right of mid + 1 is farther away than arr[mid + 1].

This means you never need to compare against all duplicates. You only ever need to check the immediate left and right neighbours of your query index within the sorted list.

The one subtlety is the circular wrap. Because the array itself is circular, the left neighbour of the very first element in the list is actually the last element in the list, since you can wrap around the ring. This is handled cleanly with modular arithmetic: (mid - 1 + n) % n for the left neighbour and (mid + 1) % n for the right.

The Optimized Solution — HashMap + Binary Search

Step 1 — Precompute the index map
Iterate through nums once and build a HashMap mapping each value to a list of all indices where it appears. The lists are sorted by construction since you insert indices in order.

Step 2 — Binary search to locate the query index
For a given query at index q, look up the index list for nums[q]. Binary search the list to find the position of q within it. This runs in O(log n) rather than O(n).

Step 3 — Check immediate neighbours and compute circular distances
Once you have the position mid, fetch arr[(mid + 1) % n] and arr[(mid - 1 + n) % n]. For each, compute the circular distance using min(|diff|, totalLength - |diff|). Return the smaller of the two.

Full Annotated Java Solution

```java
class Solution {
    public List<Integer> solveQueries(int[] nums, int[] queries) {
        int c = 0;
        // Precompute: map each value to the sorted list of indices where it appears.
        // Since we iterate left to right, the list is sorted by construction.
        HashMap<Integer, List<Integer>> mp = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            mp.computeIfAbsent(nums[i], k -> new ArrayList<>()).add(i);
        }
        List<Integer> lis = new ArrayList<>();
        while (c != queries.length) {
            // Retrieve the sorted index list for the value at the queried position
            List<Integer> arr = mp.get(nums[queries[c]]);
            int n = arr.size();
            int i = 0;
            int j = n - 1;
            int min = -1;
            while (i <= j) {
                int mid = i + (j - i) / 2;
                if (arr.get(mid) == queries[c]) {
                    // Only one occurrence in the entire array — no duplicate exists
                    if (n == 1) {
                        min = -1;
                    } else {
                        // Circular neighbour to the right within the index list
                        int right = arr.get((mid + 1) % n);
                        // Circular neighbour to the left within the index list
                        int left = arr.get((mid - 1 + n) % n);
                        // Compute circular distance to the right neighbour
                        int d1 = Math.abs(right - queries[c]);
                        int distRight = Math.min(d1, nums.length - d1);
                        // Compute circular distance to the left neighbour
                        int d2 = Math.abs(left - queries[c]);
                        int distLeft = Math.min(d2, nums.length - d2);
                        // The answer is the closer of the two neighbours
                        min = Math.min(distLeft, distRight);
                    }
                    break;
                } else if (arr.get(mid) > queries[c]) {
                    // Query index is smaller — search the left half
                    j = mid - 1;
                } else {
                    // Query index is larger — search the right half
                    i = mid + 1;
                }
            }
            lis.add(min);
            c++;
        }
        return lis;
    }
}
```

Complexity Analysis

Time Complexity: O(n log n). Building the HashMap takes O(n). For each of the q queries, binary search over the index list takes O(log n) in the worst case. Total: O(n + q log n), which simplifies to O(n log n) given the constraint that q ≤ n.

Space Complexity: O(n). The HashMap stores every index exactly once across all its lists, so total space used is O(n).

Compared to the brute force O(n × q), this is the difference between ~1.7 million operations and ~10 billion operations at the constraint limits.

Common Pitfalls

Mixing up the two values of n.
Inside the solution, n refers to arr.size() — the number of occurrences of a particular value. But when computing circular distance, you need nums.length — the full array length. These are different numbers, and swapping them silently produces wrong answers.

Forgetting the + n in the left neighbour formula. Writing (mid - 1) % n when mid is 0 produces -1 in Java, since Java's modulo preserves the sign of the dividend. Always write (mid - 1 + n) % n.

Not handling the single-occurrence case. If a value appears only once, n == 1, and the neighbour formula wraps around to the element itself, giving a distance of zero — which is completely wrong. Guard against this explicitly before running the neighbour logic.

What This Problem Teaches You

The journey from brute force to optimal here follows a pattern worth internalizing.

The brute force was correct but repeated work. Recognizing that repeated work and lifting it into a precomputation step is the single move that makes this problem tractable. The HashMap does that.

Once you have a sorted structure, binary search is almost always the right tool to find a position within it. And once you have a position in a sorted structure, you only ever need to look at adjacent elements to find the nearest one — checking anything further is redundant by definition.

These are not tricks specific to this problem. They are transferable patterns that appear across dozens of medium and hard problems on the platform. Internalizing them — rather than memorizing solutions — is what actually builds problem-solving ability over time.
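The precompute-then-lookup pattern can be checked against both examples with a standalone sketch (the wrapper name ClosestEqualDemo is mine; it uses Collections.binarySearch in place of the hand-rolled loop, which is safe here because the queried index is always present in its own list):

```java
import java.util.*;

// Standalone sketch of the HashMap + binary search approach.
public class ClosestEqualDemo {
    static List<Integer> solveQueries(int[] nums, int[] queries) {
        // Map each value to the sorted list of indices where it appears
        HashMap<Integer, List<Integer>> mp = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            mp.computeIfAbsent(nums[i], k -> new ArrayList<>()).add(i);
        }
        List<Integer> res = new ArrayList<>();
        for (int q : queries) {
            List<Integer> arr = mp.get(nums[q]);
            int n = arr.size();
            if (n == 1) { res.add(-1); continue; } // value occurs only once
            int mid = Collections.binarySearch(arr, q); // position of q in its list
            int right = arr.get((mid + 1) % n);         // circular right neighbour
            int left = arr.get((mid - 1 + n) % n);      // circular left neighbour
            int d1 = Math.abs(right - q);
            int d2 = Math.abs(left - q);
            int distRight = Math.min(d1, nums.length - d1);
            int distLeft = Math.min(d2, nums.length - d2);
            res.add(Math.min(distLeft, distRight));
        }
        return res;
    }

    public static void main(String[] args) {
        System.out.println(solveQueries(new int[]{1, 3, 1, 4, 1, 3, 2}, new int[]{0, 3, 5})); // [2, -1, 3]
        System.out.println(solveQueries(new int[]{1, 2, 3, 4}, new int[]{0, 1, 2, 3}));       // [-1, -1, -1, -1]
    }
}
```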

Arrays, HashMap, Binary Search, Circular Arrays, Medium, LeetCode, Java
LeetCode 682 Baseball Game - Java Solution Explained


Introduction

LeetCode 682 Baseball Game is one of the cleanest and most beginner-friendly stack simulation problems on LeetCode. It does not require any fancy algorithm or deep insight — it purely tests whether you can carefully read the rules and simulate them faithfully using the right data structure.

But do not let "Easy" fool you. This problem is a great place to practice thinking about which data structure fits best and why. We will solve it three different ways — Stack, ArrayList, and Deque — so you can see the tradeoffs and pick the one that makes the most sense to you.

You can find the problem here — LeetCode 682 Baseball Game.

What Is the Problem Really Asking?

You are keeping score for a baseball game with four special rules. You process a list of operations one by one and maintain a record of scores. At the end, return the total sum of all scores in the record.

The four operations are:
- A number (like "5" or "-2") — just add that number as a new score to the record.
- "C" — the last score was invalid; remove it from the record.
- "D" — add a new score that is double the most recent score.
- "+" — add a new score that is the sum of the two most recent scores.

That is it. Four rules, simulate them in order, sum up what is left.

Real Life Analogy — A Scoreboard With Corrections

Imagine a scoreboard operator at a sports event. They write scores on a whiteboard as the game progresses:
- A player scores 5 points → write 5
- Another player scores 2 → write 2
- Referee says the last score was invalid → erase the last number (C)
- Special bonus rule kicks in → double the last valid score (D)
- Two scores combine → add the last two scores as one entry (+)

At the end, add up everything on the whiteboard. The stack is your whiteboard — you write on top and erase from the top.

Why Stack Is the Natural Fit

All four operations only ever look at or modify the most recently added scores. C removes the last one. D doubles the last one. + uses the last two. This "most recent first" access pattern is the definition of LIFO — Last In, First Out — which is exactly what a Stack provides.

Any time a problem says "the previous score" or "the last two scores," your brain should immediately think Stack.

Approach 1: Stack (Your Solution) ✅

The Idea
Use a Stack of integers. For each operation:
- Number → parse and push
- C → pop the top
- D → peek the top, push the doubled value
- + → pop the top two, push both back, push their sum

```java
public int calPoints(String[] operations) {
    Stack<Integer> st = new Stack<>();
    for (int i = 0; i < operations.length; i++) {
        String op = operations[i];
        if (op.equals("C")) {
            st.pop();                      // remove last score
        } else if (op.equals("D")) {
            st.push(st.peek() * 2);        // double of last score
        } else if (op.equals("+")) {
            int prev1 = st.pop();          // most recent score
            int prev2 = st.pop();          // second most recent score
            int sum = prev1 + prev2;
            st.push(prev2);                // put them back
            st.push(prev1);
            st.push(sum);                  // push the new score
        } else {
            st.push(Integer.parseInt(op)); // regular number
        }
    }
    // sum all remaining scores
    int total = 0;
    while (!st.empty()) {
        total += st.pop();
    }
    return total;
}
```

One small improvement over your original solution — using op.equals("C") instead of op.charAt(0) == 'C'. This is cleaner and handles edge cases better, since negative numbers like "-2" also start with - rather than a digit, so charAt(0) comparisons can get tricky. Using equals is always safer for string operations.

Why the + Operation Needs Pop-Push-Pop

The trickiest part is the + operation. You need the two most recent scores. A Stack only lets you see the top. So you pop the first, then the second, compute the sum, then push both back before pushing the sum. This restores the record correctly — the previous two scores stay, and the new sum score is added on top.

Detailed Dry Run — ops = ["5","2","C","D","+"]

Let us trace every step carefully:
1. "5" → number, parse and push. Stack: [5]
2. "2" → number, parse and push. Stack: [5, 2]
3. "C" → remove last score, pop. Stack: [5]
4. "D" → double last score, peek = 5, push 10. Stack: [5, 10]
5. "+" → sum of last two: pop prev1 = 10, pop prev2 = 5, sum = 15; push prev2 = 5, push prev1 = 10, push sum = 15. Stack: [5, 10, 15]

Sum all: 5 + 10 + 15 = 30 ✅

Detailed Dry Run — ops = ["5","-2","4","C","D","9","+","+"]

1. "5" → push 5. Stack: [5]
2. "-2" → push -2. Stack: [5, -2]
3. "4" → push 4. Stack: [5, -2, 4]
4. "C" → pop 4. Stack: [5, -2]
5. "D" → peek = -2, push -4. Stack: [5, -2, -4]
6. "9" → push 9. Stack: [5, -2, -4, 9]
7. "+" → prev1 = 9, prev2 = -4, sum = 5. Push -4, 9, 5. Stack: [5, -2, -4, 9, 5]
8. "+" → prev1 = 5, prev2 = 9, sum = 14. Push 9, 5, 14. Stack: [5, -2, -4, 9, 5, 14]

Sum: 5 + (-2) + (-4) + 9 + 5 + 14 = 27 ✅

Approach 2: ArrayList (Most Readable)

The Idea
ArrayList gives you index-based access, which makes the + operation much cleaner — no need to pop and push back. Just access the last two elements directly using size()-1 and size()-2.

```java
public int calPoints(String[] operations) {
    ArrayList<Integer> record = new ArrayList<>();
    for (String op : operations) {
        int n = record.size();
        if (op.equals("C")) {
            record.remove(n - 1);              // remove last element
        } else if (op.equals("D")) {
            record.add(record.get(n - 1) * 2); // double last
        } else if (op.equals("+")) {
            // sum of last two — no need to remove and re-add!
            record.add(record.get(n - 1) + record.get(n - 2));
        } else {
            record.add(Integer.parseInt(op));
        }
    }
    int total = 0;
    for (int score : record) {
        total += score;
    }
    return total;
}
```

See how the + operation becomes a single line with ArrayList? record.get(n-1) + record.get(n-2) directly accesses the last two elements without any pop-push gymnastics.

Dry Run — ops = ["5","2","C","D","+"]
1. "5" → add 5. List: [5]
2. "2" → add 2. List: [5, 2]
3. "C" → remove last.
List: [5]"D" → 5×2=10, add 10. List: [5, 10]"+" → get(1)+get(0) = 10+5=15, add 15. List: [5, 10, 15]Sum: 30 ✅Time Complexity: O(n) — single pass through operations Space Complexity: O(n) — ArrayList stores at most n scoresAnd there is no hidden cost here: remove(n-1) on an ArrayList removes the last element in O(1) (no shifting needed). And get() is O(1). So this is fully O(n) overall and arguably the cleanest solution to read and understand.Approach 3: Deque (ArrayDeque — Fastest in Java)The IdeaArrayDeque is faster than Stack in Java because Stack is synchronized (thread-safe overhead) and ArrayDeque is not. For single-threaded LeetCode problems, ArrayDeque is always the better choice over Stack.public int calPoints(String[] operations) { Deque<Integer> deque = new ArrayDeque<>(); for (String op : operations) { if (op.equals("C")) { deque.pollLast(); // remove last } else if (op.equals("D")) { deque.offerLast(deque.peekLast() * 2); // double last } else if (op.equals("+")) { int prev1 = deque.pollLast(); int prev2 = deque.pollLast(); int sum = prev1 + prev2; deque.offerLast(prev2); // restore deque.offerLast(prev1); // restore deque.offerLast(sum); // new score } else { deque.offerLast(Integer.parseInt(op)); } } int total = 0; for (int score : deque) { total += score; } return total;}The logic is identical to the Stack approach. The only difference is the method names — offerLast instead of push, pollLast instead of pop, peekLast instead of peek.Time Complexity: O(n) Space Complexity: O(n)For iterating a Deque to sum, you can use a for-each loop directly — no need to pop everything out like with Stack.Approach Comparison

Approach     Time   Space   Best For
Stack        O(n)   O(n)    Classic interview answer, clear LIFO intent
ArrayList    O(n)   O(n)    Cleanest code, easiest to read
ArrayDeque   O(n)   O(n)    Best performance, preferred in production

All three approaches have identical time and space complexity. The difference is purely in code style and readability. In an interview, any of these is perfectly acceptable.
Mention that ArrayDeque is preferred over Stack in Java for performance if you want to impress.Why op.equals() Is Better Than op.charAt(0)Your original solution uses operations[i].charAt(0) == 'C' to check operations. This works but has a subtle risk — the + character check with charAt(0) is fine, but imagine if a future test had a number starting with C or D (it will not in this problem but defensive coding is a good habit). More importantly, "-2".charAt(0) is '-' which is fine, but using equals is semantically clearer — you are comparing the whole string, not just the first character. This shows cleaner coding habits in interviews.Edge Cases to KnowNegative numbers like "-2" Integer.parseInt("-2") handles negatives perfectly. The D operation on -2 gives -4. The + operation works correctly with negatives too. No special handling needed."C" after a "+" that added a score The problem guarantees C always has at least one score to remove. So after + adds a score, a C removes just that one new score — the previous two scores that + used remain intact. This is correct behavior and our solution handles it automatically.All scores removed If all operations are numbers followed by C operations removing every score, the stack ends up empty and the sum is 0. Our while loop handles this correctly — it simply never executes and returns 0.Only one operation A single number like ["5"] → push 5, sum is 5. Works fine.Common Mistakes to AvoidIn the + operation, forgetting to push both numbers back When you pop prev1 and prev2 to compute the sum, you must push them back onto the stack before pushing the sum. If you only push the sum without restoring prev1 and prev2, those scores disappear from the record permanently — which is wrong. The + operation only adds a new score, it does not remove the previous ones.Using charAt(0) comparison for detecting numbers Negative numbers start with -, not a digit. 
If you check charAt(0) >= '0' && charAt(0) <= '9' to detect numbers, you will miss negatives. The safest approach is to check for C, D, and + explicitly using equals, and fall through to the else for everything else (which covers both positive and negative numbers).Calling st.peek() or st.pop() without checking empty The problem guarantees valid operations — C always has something to remove, + always has two previous scores, D always has one. But in real code and defensive interview solutions, adding empty checks shows good habits even when the constraints guarantee safety.FAQs — People Also AskQ1. Why is Stack a good choice for LeetCode 682 Baseball Game? Because all four operations only access the most recently added scores — the last score for C and D, the last two for +. This "most recent first" access pattern is exactly what LIFO (Last In First Out) provides. Stack's push, pop, and peek all run in O(1) making it perfectly efficient.Q2. What is the time complexity of LeetCode 682? O(n) time where n is the number of operations. Each operation performs a constant number of stack operations (at most 3 pushes/pops for the + case). Space complexity is O(n) for storing scores.Q3. Why does the + operation need to pop and push back in the Stack approach? Stack only gives direct access to the top element. To get the second most recent score, you must pop the first one, peek/pop the second, then push the first one back. The ArrayList approach avoids this by using index-based access directly.Q4. What is the difference between Stack and ArrayDeque in Java for this problem? Both work correctly. ArrayDeque is faster because Stack is a legacy class that extends Vector and is synchronized (thread-safe), adding unnecessary overhead for single-threaded use. ArrayDeque has no synchronization overhead. For LeetCode and interviews, either is acceptable but ArrayDeque is technically better.Q5. Is LeetCode 682 asked in coding interviews? 
It appears occasionally as a warmup or screening problem. Its main value is testing whether you can carefully simulate rules without making logical errors — a skill that matters in systems programming, game development, and any domain with complex state management.Similar LeetCode Problems to Practice Next71. Simplify Path — Medium — stack simulation with path operations1047. Remove All Adjacent Duplicates In String — Easy — stack simulation735. Asteroid Collision — Medium — stack simulation with conditions150. Evaluate Reverse Polish Notation — Medium — stack with arithmetic operations, very similar pattern227. Basic Calculator II — Medium — stack with operator precedenceConclusionLeetCode 682 Baseball Game is a perfect problem to build confidence with stack simulation. The four operations are clearly defined, the rules are unambiguous, and the stack maps naturally to every operation. Once you understand why pop-push-back is needed for + in the stack approach and how ArrayList simplifies that with index access, you have genuinely understood the tradeoffs between these data structures.If you are newer to stacks, start with the ArrayList solution for clarity. Once that clicks, rewrite it with Stack to understand the LIFO mechanics. Then try ArrayDeque to understand why it is preferred in modern Java code.
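If you want to run the approaches from this article side by side, here is a minimal self-contained sketch. The class and method names (BaseballGame, calPointsList, calPointsDeque) are mine for illustration, not part of the original solutions:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;

public class BaseballGame {

    // ArrayList version: index access avoids the pop/push-back dance for "+"
    public static int calPointsList(String[] operations) {
        ArrayList<Integer> record = new ArrayList<>();
        for (String op : operations) {
            int n = record.size();
            if (op.equals("C")) {
                record.remove(n - 1);                              // drop last score
            } else if (op.equals("D")) {
                record.add(record.get(n - 1) * 2);                 // double last score
            } else if (op.equals("+")) {
                record.add(record.get(n - 1) + record.get(n - 2)); // sum of last two
            } else {
                record.add(Integer.parseInt(op));                  // plain number
            }
        }
        int total = 0;
        for (int score : record) total += score;
        return total;
    }

    // ArrayDeque version: same logic, deque method names
    public static int calPointsDeque(String[] operations) {
        Deque<Integer> deque = new ArrayDeque<>();
        for (String op : operations) {
            if (op.equals("C")) {
                deque.pollLast();
            } else if (op.equals("D")) {
                deque.offerLast(deque.peekLast() * 2);
            } else if (op.equals("+")) {
                int prev1 = deque.pollLast();
                int prev2 = deque.pollLast();
                deque.offerLast(prev2);          // restore second most recent
                deque.offerLast(prev1);          // restore most recent
                deque.offerLast(prev1 + prev2);  // push the new score
            } else {
                deque.offerLast(Integer.parseInt(op));
            }
        }
        int total = 0;
        for (int score : deque) total += score;
        return total;
    }

    public static void main(String[] args) {
        System.out.println(calPointsList(new String[]{"5", "2", "C", "D", "+"}));                  // 30
        System.out.println(calPointsDeque(new String[]{"5", "-2", "4", "C", "D", "9", "+", "+"})); // 27
    }
}
```

The two inputs in main are the dry-run examples traced above, so you can check the printed totals (30 and 27) against the traces directly.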

LeetCodeJavaStackArrayListDequeEasy
LeetCode 121 — Best Time to Buy and Sell Stock I | Two Pointer / Sliding Window Explained

🚀 Try This Problem First!Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock/Understanding the ProblemYou are given an array prices where prices[i] is the stock price on day i. You can buy on one day and sell on a later day. You want to maximize your profit.Goal: Return the maximum profit possible. If no profit can be made, return 0.Key Rules:You must buy before you sell — no going back in time.You can only make one transaction (one buy, one sell).If prices only go down, profit is impossible — return 0.Constraints:1 ≤ prices.length ≤ 10⁵0 ≤ prices[i] ≤ 10⁴Understanding the Problem With a Real-World AnalogyImagine you're watching a stock ticker for a week. You want to pick the single best day to buy and a later day to sell. You can't predict the future — you just have historical prices. The question is: looking back at all the prices, what was the best single buy-sell pair you could have chosen?Key Observation (Before Writing a Single Line of Code)The profit on any given pair of days is:profit = prices[sell_day] - prices[buy_day]To maximize profit, for every potential sell day, we want the lowest possible buy price seen so far to the left of it. This is the core insight that drives the efficient solution.If at any point the current price is lower than our tracked minimum, there is no point keeping the old minimum — the new one is strictly better for any future sell day.Intuition — The Two Pointer ApproachWe use two pointers i (buy pointer) and j (sell pointer), both starting at the beginning with i = 0 and j = 1.At every step we ask: is prices[j] greater than prices[i]?If yes → this is a profitable pair. Compute the profit and update the maximum if it's better.If no → prices[j] is cheaper than prices[i]. There's no reason to keep i where it is — any future sell day would be better served by buying at j instead. 
So we move i to j.Either way, j always moves forward by 1 until it reaches the end of the array.Why Moving i to j is CorrectThis is the most important conceptual point. When prices[j] < prices[i], you might wonder — why not just skip j and move on? Why move i?Because for any future day k > j, the profit buying at j is:prices[k] - prices[j]And the profit buying at i is:prices[k] - prices[i]Since prices[j] < prices[i], buying at j will always give a higher or equal profit for the same sell day k. So we update i = j — there is no scenario where keeping the old i is better.Dry Run — Example 1 (Step by Step)Input: prices = [7, 1, 5, 3, 6, 4]We start with i = 0, j = 1, maxP = 0.Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 17 > 1 → prices[j] is cheaper. Move i to j. i = 1, j moves to 2. maxP = 0.Step 2: i = 1, j = 2 → prices[i] = 1, prices[j] = 51 < 5 → Profitable! profit = 5 - 1 = 4. 4 > 0 → update maxP = 4. j moves to 3.Step 3: i = 1, j = 3 → prices[i] = 1, prices[j] = 31 < 3 → Profitable! profit = 3 - 1 = 2. 2 < 4 → maxP stays 4. j moves to 4.Step 4: i = 1, j = 4 → prices[i] = 1, prices[j] = 61 < 6 → Profitable! profit = 6 - 1 = 5. 5 > 4 → update maxP = 5. j moves to 5.Step 5: i = 1, j = 5 → prices[i] = 1, prices[j] = 41 < 4 → Profitable! profit = 4 - 1 = 3. 3 < 5 → maxP stays 5. j moves to 6. j = 6 = prices.length → loop ends.Output: maxP = 5 ✅ (Buy at day 2 price=1, sell at day 5 price=6)Dry Run — Example 2 (Step by Step)Input: prices = [7, 6, 4, 3, 1]We start with i = 0, j = 1, maxP = 0.Step 1: i = 0, j = 1 → prices[i] = 7, prices[j] = 67 > 6 → Move i to j. i = 1, j moves to 2. maxP = 0.Step 2: i = 1, j = 2 → prices[i] = 6, prices[j] = 46 > 4 → Move i to j. i = 2, j moves to 3. maxP = 0.Step 3: i = 2, j = 3 → prices[i] = 4, prices[j] = 34 > 3 → Move i to j. i = 3, j moves to 4. maxP = 0.Step 4: i = 3, j = 4 → prices[i] = 3, prices[j] = 13 > 1 → Move i to j. i = 4, j moves to 5. 
j = 5 = prices.length → loop ends.Output: maxP = 0 ✅ (Prices only went down, no profitable transaction exists)The Code — Implementationclass Solution {public int maxProfit(int[] prices) {int i = 0; // Buy pointer — points to the current minimum price dayint j = 1; // Sell pointer — scans forward through the arrayint maxP = 0; // Tracks the maximum profit seen so far// j scans from day 1 to the end of the arraywhile (i < j && j < prices.length) {if (prices[i] > prices[j]) {// prices[j] is cheaper than current buy price// Move buy pointer to j — buying here is strictly better// for any future sell dayi = j;} else {// prices[j] > prices[i] — this is a profitable pair// Calculate profit and update maxP if it's the best so farint profit = prices[j] - prices[i];if (profit > maxP) {maxP = profit;}}// Always move the sell pointer forwardj++;}// If no profitable transaction was found, maxP remains 0return maxP;}}Code Walkthrough — Step by StepInitialization: i = 0 acts as our buy day pointer (always pointing to the lowest price seen so far). j = 1 is our sell day pointer scanning forward. maxP = 0 is our answer — defaulting to 0 handles the case where no profit is possible.The loop condition: i < j ensures we never sell before buying. j < prices.length ensures we don't go out of bounds.When prices[i] > prices[j]: The current sell day price is cheaper than our buy day. We update i = j, because buying at j dominates buying at i for all future sell days.When prices[j] >= prices[i]: We have a potential profit. We compute it and update maxP if it beats the current best.j always increments: Regardless of which branch we take, j++ moves the sell pointer forward every iteration — this is what drives the loop to completion.Common Mistakes to AvoidNot returning 0 for no-profit cases: If prices are strictly decreasing, maxP never gets updated and stays 0. 
The initialization maxP = 0 handles this correctly — never initialize it to a negative number.Using a nested loop (brute force): A common first instinct is two nested loops checking every pair. That is O(N²) and will TLE on large inputs. The two pointer approach solves it in O(N).Thinking you need to sort: Sorting destroys the order of days, which is fundamental to the problem. Never sort the prices array here.Moving i forward by 1 instead of to j: When prices[j] < prices[i], some people write i++ instead of i = j. This is wrong — you might miss the new minimum entirely and waste iterations.Complexity AnalysisTime Complexity: O(N) The j pointer traverses the array exactly once from index 1 to the end. The i pointer only moves forward and never exceeds j. So the entire algorithm is a single linear pass — O(N).Space Complexity: O(1) No extra arrays, maps, or stacks are used. Only three integer variables — i, j, and maxP.Alternative Way to Think About It (Min Tracking)Some people write this problem using a minPrice variable instead of two pointers. 
Both approaches are equivalent — the two pointer framing is slightly more intuitive visually, but here is the min-tracking version for reference:int minPrice = Integer.MAX_VALUE;int maxProfit = 0;for (int price : prices) {if (price < minPrice) {minPrice = price;} else if (price - minPrice > maxProfit) {maxProfit = price - minPrice;}}return maxProfit;The logic is the same — always track the cheapest buy day seen so far, and for each day compute what profit would look like if you sold today.Similar Problems (Build on This Foundation)LeetCode 122 — Best Time to Buy and Sell Stock II (multiple transactions allowed) [ Blog is also available on this - Read Now ]LeetCode 123 — Best Time to Buy and Sell Stock III (at most 2 transactions) [ Blog is also available on this - Read Now ]LeetCode 188 — Best Time to Buy and Sell Stock IV (at most k transactions)LeetCode 309 — Best Time to Buy and Sell Stock with CooldownLeetCode 714 — Best Time to Buy and Sell Stock with Transaction FeeLeetCode 121 is the foundation. Master this one deeply before moving to the others — they all build on the same core idea.Key Takeaways✅ For every potential sell day, the best buy day is always the minimum price seen to its left — this drives the whole approach.✅ When the sell pointer finds a cheaper price than the buy pointer, always move the buy pointer there — it strictly dominates the old buy day for all future sells.✅ Initialize maxP = 0 so that no-profit scenarios (prices only going down) are handled automatically.✅ The two pointer approach solves this in a single linear pass — O(N) time and O(1) space.✅ j always increments every iteration — i only moves when a cheaper price is found.Happy Coding! If this clicked for you, the entire Stock series on LeetCode will feel much more approachable.
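To see that the two framings really are equivalent, here is a runnable sketch with both versions side by side (the class and method names are mine for illustration):

```java
public class StockProfit {

    // Two pointer version from the walkthrough above
    public static int maxProfitTwoPointer(int[] prices) {
        int i = 0;      // buy pointer: lowest price seen so far
        int j = 1;      // sell pointer: scans forward
        int maxP = 0;   // best profit found (stays 0 if no profitable pair exists)
        while (i < j && j < prices.length) {
            if (prices[i] > prices[j]) {
                i = j;  // cheaper buy day found; it dominates the old one
            } else {
                maxP = Math.max(maxP, prices[j] - prices[i]);
            }
            j++;
        }
        return maxP;
    }

    // Min-tracking version: same idea, one index variable fewer
    public static int maxProfitMinTrack(int[] prices) {
        int minPrice = Integer.MAX_VALUE;
        int maxProfit = 0;
        for (int price : prices) {
            if (price < minPrice) {
                minPrice = price;             // new cheapest buy day
            } else if (price - minPrice > maxProfit) {
                maxProfit = price - minPrice; // better sell found
            }
        }
        return maxProfit;
    }

    public static void main(String[] args) {
        int[] up = {7, 1, 5, 3, 6, 4};
        int[] down = {7, 6, 4, 3, 1};
        System.out.println(maxProfitTwoPointer(up));   // 5
        System.out.println(maxProfitMinTrack(up));     // 5
        System.out.println(maxProfitTwoPointer(down)); // 0
        System.out.println(maxProfitMinTrack(down));   // 0
    }
}
```

Both methods return the same answer on the two dry-run inputs from this article, which is a quick way to convince yourself the pointer-based and min-tracking formulations track the same quantity.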

LeetCodeTwo PointersEasyJavaDSA
Contains Duplicate

LeetCode Problem 217Link of the Problem to try: LinkGiven an integer array nums, return true if any value appears at least twice in the array, and return false if every element is distinct.Example 1:Input: nums = [1,2,3,1]Output: trueExplanation:The element 1 occurs at the indices 0 and 3.Example 2:Input: nums = [1,2,3,4]Output: falseExplanation:All elements are distinct.Example 3:Input: nums = [1,1,1,3,3,4,3,2,4,2]Output: trueConstraints:1 <= nums.length <= 10⁵ and -10⁹ <= nums[i] <= 10⁹Solution:1. The HashMap ApproachUsing a HashMap is a highly effective way to track occurrences. Although we are only checking for the existence of a value, the HashMap logic allows us to store the element as a "Key" and its frequency as a "Value."Logic:Iterate through the nums array.Before inserting an element, check if the HashMap already contains that key.If it exists, you've found your duplicate—return true.Otherwise, put the value into the map and continue.Code:public boolean containsDuplicate(int[] nums) {HashMap<Integer, Integer> map = new HashMap<>();for (int i = 0; i < nums.length; i++) {// Check if the current number is already a key in our mapif (map.containsKey(nums[i])) {return true;}// Map the number to its count (1)map.put(nums[i], 1);}return false;}2. The HashSet Approach (Optimized for Storage)While similar to the HashMap, the HashSet is often more appropriate for this specific problem because we only care if a number exists, not how many times it appears or what its index is.Logic:We initialize an empty HashSet.As we loop through the array, we check ms.contains(nums[i]).If the set already has the number, it's a duplicate.This approach is preferred over HashMap for this problem because it uses less memory and has cleaner syntax.Code:public boolean containsDuplicate(int[] nums) {HashSet<Integer> ms = new HashSet<>();for (int i = 0; i < nums.length; i++) {if (ms.contains(nums[i])) {return true;}ms.add(nums[i]);}return false;}3.
The Sorting & Two-Pointer ApproachIf you want to avoid using extra memory (like a Set or Map), you can use the Two-Pointer method combined with Sorting.Logic:First, we sort the array. This ensures that any duplicate values are placed next to each other.We use two pointers: j (the previous element) and i (the current element).By moving these pointers across the array, we compare the values. If nums[i] == nums[j], a duplicate exists.Code:public boolean containsDuplicate(int[] nums) {// Step 1: Sort the arrayArrays.sort(nums);// Step 2: Use two pointers to compare adjacent elementsint j = 0;for (int i = 1; i < nums.length; i++) {if (nums[i] == nums[j]) {return true; // Duplicate found}j++; // Move the previous pointer forward}return false;}Performance Summary

Approach      Time Complexity   Space Complexity   Recommendation
HashSet       O(n)              O(n)               Best Overall – Fastest performance.
HashMap       O(n)              O(n)               Good, but HashSet is cleaner for this use case.
Two Pointer   O(n log n)        O(1)               Best for Memory – Use if space is limited.

Final ThoughtsChoosing the right approach depends on whether you want to prioritize speed (HashSet) or memory efficiency (Two-Pointer). For most coding interviews, the HashSet solution is the "Gold Standard" due to its linear time complexity.
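Here is a self-contained sketch of the HashSet and sorting approaches. The class and method names are mine; the set version also leans on the fact that HashSet.add returns false when the element was already present, a small variation on the contains-then-add pattern shown above:

```java
import java.util.Arrays;
import java.util.HashSet;

public class ContainsDuplicate {

    // HashSet approach: add() returns false if the element was already present
    public static boolean withSet(int[] nums) {
        HashSet<Integer> seen = new HashSet<>();
        for (int n : nums) {
            if (!seen.add(n)) {
                return true;   // duplicate found
            }
        }
        return false;
    }

    // Sorting approach: duplicates become adjacent after sorting
    public static boolean withSort(int[] nums) {
        int[] copy = nums.clone();   // avoid mutating the caller's array
        Arrays.sort(copy);
        for (int i = 1; i < copy.length; i++) {
            if (copy[i] == copy[i - 1]) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(withSet(new int[]{1, 2, 3, 1}));                    // true
        System.out.println(withSet(new int[]{1, 2, 3, 4}));                    // false
        System.out.println(withSort(new int[]{1, 1, 1, 3, 3, 4, 3, 2, 4, 2})); // true
    }
}
```

Cloning before sorting is a deliberate choice: on LeetCode mutating nums is harmless, but in library-style code a containment check should not reorder its input as a side effect.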

LeetCodeEasyHashMap
Permutation with Spaces Explained Using Recursion & Decision Tree | Java Solution GFG

IntroductionThe Permutation with Spaces problem is a classic recursion question that helps build a strong understanding of decision-making and backtracking patterns.Instead of generating permutations by rearranging characters, this problem focuses on inserting spaces between characters in all possible ways.What makes this problem powerful is its decision tree structure, which you’ve already visualized perfectly. In this article, we will directly connect that intuition with code.Link of Problem: GeeksforGeeks – Permutation with SpacesProblem StatementGiven a string s, generate all possible strings by placing:Either a spaceOr no spacebetween every pair of characters.Return all results in sorted order.ExampleInput:s = "ABC"Output:A B CA BCAB CABCUnderstanding Your Decision Tree (Very Important)Two Choices at Each Step:❌ Do NOT add space before the character✔️ Add space before the characterMapping TreeFrom diagram:At B:"AB" → no space"A B" → spaceAt C:From "AB":"ABC""AB C"From "A B":"A BC""A B C"Final Output (Leaf Nodes)As shown in your diagram:ABC, AB C, A BC, A B C📌 This is exactly what recursion generates.Key InsightAt every index (except first), we have:2 choices → space OR no spaceSo total combinations:2^(n-1)Approach: Recursion + Decision MakingIdeaFix the first characterFor every next character:Add space + characterAdd character directlyContinue recursivelyJava Code with Detailed Commentsimport java.util.*;class Solution { // List to store all results ArrayList<String> lis = new ArrayList<>(); void solve(String s, int ind, String curr) { // Base case: // If index reaches end of string, // we have formed one valid permutation if (ind == s.length()) { lis.add(curr); // store the result return; } // Choice 1: Add SPACE before current character // Example: "A" → "A B" solve(s, ind + 1, curr + " " + s.charAt(ind)); // Choice 2: Do NOT add space // Example: "A" → "AB" solve(s, ind + 1, curr + s.charAt(ind)); } ArrayList<String> permutation(String s) { // Start with 
first character (no space before it) String curr = "" + s.charAt(0); // Start recursion from index 1 solve(s, 1, curr); // Sort results as required in problem Collections.sort(lis); return lis; }}Step-by-Step Execution (Using Your Tree)For "ABC":Start → "A"At "B":"AB""A B"At "C":"ABC", "AB C""A BC", "A B C"Exactly matches your decision tree leaf nodes ✅Complexity AnalysisTime Complexity: O(2ⁿ)Space Complexity: O(2ⁿ)Why This Approach WorksRecursion explores every possible choiceEach level = one characterEach branch = decision (space / no space)Leaf nodes = final answersKey TakeawaysThis is a binary decision recursion problemAlways identify:ChoicesBase conditionYour decision tree = direct blueprint of recursionSame pattern applies to:SubsetsBinary choices problemsConclusionThe Permutation with Spaces problem becomes extremely simple once the decision tree is understood—and your diagram already captures that perfectly.The recursion directly follows the same structure:Every branch = one decisionEvery leaf = one answerMaster this pattern, and you’ll find many recursion problems much easier to solve.
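Here is a self-contained, runnable version of the same recursion. One deliberate change from the article's code: I pass the result list as a parameter instead of keeping it in a field, so repeated calls do not accumulate stale results (the class name is mine):

```java
import java.util.ArrayList;
import java.util.Collections;

public class PermutationWithSpaces {

    public static ArrayList<String> permutation(String s) {
        ArrayList<String> results = new ArrayList<>();
        // First character is fixed; decisions start from index 1
        solve(s, 1, "" + s.charAt(0), results);
        Collections.sort(results);   // problem asks for sorted order
        return results;
    }

    private static void solve(String s, int ind, String curr, ArrayList<String> results) {
        if (ind == s.length()) {
            results.add(curr);       // leaf of the decision tree
            return;
        }
        solve(s, ind + 1, curr + " " + s.charAt(ind), results);  // choice 1: space
        solve(s, ind + 1, curr + s.charAt(ind), results);        // choice 2: no space
    }

    public static void main(String[] args) {
        System.out.println(permutation("ABC"));  // [A B C, A BC, AB C, ABC]
    }
}
```

Note that the sorted order puts the spaced variants first because the space character compares lower than any letter, which matches the expected output listed in the problem statement.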

MediumGeeksforGeeksRecursionJava
Recursion in Java - Complete Guide With Examples and Practice Problems

IntroductionIf there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their head, and then immediately freeze when they have to write recursive code themselves.The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere — in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.This guide covers everything from the ground up. What recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.What Is Recursion?Recursion is when a function calls itself to solve a smaller version of the same problem.That is the complete definition. But let us make it concrete.Imagine you want to count down from 5 to 1. One way is a loop. Another way is — print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base — there is nothing left to count down.void countDown(int n) { if (n == 0) return; // stop here System.out.println(n); countDown(n - 1); // solve the smaller version}The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function — the base case.The Two Parts Every Recursive Function Must HaveEvery correctly written recursive function has exactly two parts. 
Without both, the function either gives wrong answers or runs forever.Part 1: Base CaseThe base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion.Without a base case, recursion never stops and you get a StackOverflowError — Java's way of telling you the call stack ran out of memory.Part 2: Recursive CaseThe recursive case is where the function calls itself with a smaller or simpler input — moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have an infinite loop.Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.How Recursion Works — The Call StackThis is the mental model that most explanations skip, and it is the reason recursion confuses people.Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes.When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.Let us trace countDown(3) through the call stack:countDown(3) called → frame pushed prints 3 calls countDown(2) → frame pushed prints 2 calls countDown(1) → frame pushed prints 1 calls countDown(0) → frame pushed n == 0, return → frame popped back in countDown(1), return → frame popped back in countDown(2), return → frame popped back in countDown(3), return → frame poppedOutput: 3, 2, 1The call stack grows as calls go deeper, then shrinks as calls return. 
This is why recursion uses O(n) space for n levels deep — each level occupies one stack frame in memory.Your First Real Recursive Function — FactorialFactorial is the classic first recursion example. n! = n × (n-1) × (n-2) × ... × 1Notice the pattern — n! = n × (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.public int factorial(int n) { // base case if (n == 0 || n == 1) return 1; // recursive case return n * factorial(n - 1);}Dry Run — factorial(4)factorial(4)= 4 * factorial(3)= 4 * 3 * factorial(2)= 4 * 3 * 2 * factorial(1)= 4 * 3 * 2 * 1= 24The call stack builds up going in, then multiplications happen coming back out. This "coming back out" phase is called the return phase or unwinding of the stack.Time Complexity: O(n) — n recursive calls Space Complexity: O(n) — n frames on the call stackThe Two Phases of RecursionEvery recursive function has two phases and understanding both is critical.Phase 1: The Call Phase (Going In)This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase — in order from the outermost call to the innermost.Phase 2: The Return Phase (Coming Back Out)This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase — in reverse order, from the innermost call back to the outermost.This distinction explains why the output order can be surprising:void printBothPhases(int n) { if (n == 0) return; System.out.println("Going in: " + n); // call phase printBothPhases(n - 1); System.out.println("Coming out: " + n); // return phase}For printBothPhases(3):Going in: 3Going in: 2Going in: 1Coming out: 1Coming out: 2Coming out: 3This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.Types of RecursionRecursion is not one-size-fits-all. 
There are several distinct types and knowing which type applies to a problem shapes how you write the solution.1. Direct RecursionThe function calls itself directly. This is the most common type — what we have seen so far.void direct(int n) { if (n == 0) return; direct(n - 1); // calls itself}2. Indirect RecursionFunction A calls Function B which calls Function A. They form a cycle.void funcA(int n) { if (n <= 0) return; System.out.println("A: " + n); funcB(n - 1);}void funcB(int n) { if (n <= 0) return; System.out.println("B: " + n); funcA(n - 1);}Used in: state machines, mutual recursion in parsers, certain mathematical sequences.3. Tail RecursionThe recursive call is the last operation in the function. Nothing happens after the recursive call returns — no multiplication, no addition, nothing.// NOT tail recursive — multiplication happens after returnint factorial(int n) { if (n == 1) return 1; return n * factorial(n - 1); // multiply after return — not tail}// Tail recursive — recursive call is the last thingint factorialTail(int n, int accumulator) { if (n == 1) return accumulator; return factorialTail(n - 1, n * accumulator); // last operation}Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally — no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.4. Head RecursionThe recursive call happens first, before any other processing. All processing happens in the return phase.void headRecursion(int n) { if (n == 0) return; headRecursion(n - 1); // call first System.out.println(n); // process after}// Output: 1 2 3 4 5 (processes in reverse order of calls)5. Tree RecursionThe function makes more than one recursive call per invocation. 
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.

int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls
}

The call tree for fibonacci(4):

fib(4)
├── fib(3)
│   ├── fib(2)
│   │   ├── fib(1)
│   │   └── fib(0)
│   └── fib(1)
└── fib(2)
    ├── fib(1)
    └── fib(0)

Time Complexity: O(2ⁿ) — exponential! Each call spawns two more.
Space Complexity: O(n) — maximum depth of the call tree

This is why memoization (caching results) is so important for tree recursion — it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.

6. Mutual Recursion

A specific form of indirect recursion where two functions call each other alternately to solve a problem. Different from indirect recursion in that the mutual calls are the core mechanism of the solution.

// Check if a number is even or odd using mutual recursion
boolean isEven(int n) {
    if (n == 0) return true;
    return isOdd(n - 1);
}

boolean isOdd(int n) {
    if (n == 0) return false;
    return isEven(n - 1);
}

Common Recursion Patterns in DSA

These are the patterns you will see over and over in interview problems.
Recognizing them is more important than memorizing solutions.Pattern 1: Linear Recursion (Do Something, Recurse on Rest)Process the current element, then recurse on the remaining problem.// Sum of arrayint arraySum(int[] arr, int index) { if (index == arr.length) return 0; // base case return arr[index] + arraySum(arr, index + 1); // current + rest}Pattern 2: Divide and Conquer (Split Into Two Halves)Split the problem into two halves, solve each recursively, combine results.// Merge Sortvoid mergeSort(int[] arr, int left, int right) { if (left >= right) return; // base case — single element int mid = (left + right) / 2; mergeSort(arr, left, mid); // sort left half mergeSort(arr, mid + 1, right); // sort right half merge(arr, left, mid, right); // combine}Pattern 3: Backtracking (Try, Recurse, Undo)Try a choice, recurse to explore it, undo the choice when backtracking.// Generate all subsetsvoid subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { if (index == nums.length) { result.add(new ArrayList<>(current)); return; } // Choice 1: include nums[index] current.add(nums[index]); subsets(nums, index + 1, current, result); current.remove(current.size() - 1); // undo // Choice 2: exclude nums[index] subsets(nums, index + 1, current, result);}Pattern 4: Tree Recursion (Left, Right, Combine)Recurse on left subtree, recurse on right subtree, combine or process results.// Height of binary treeint height(TreeNode root) { if (root == null) return 0; // base case int leftHeight = height(root.left); // solve left int rightHeight = height(root.right); // solve right return 1 + Math.max(leftHeight, rightHeight); // combine}Pattern 5: Memoization (Cache Recursive Results)Store results of recursive calls so the same subproblem is never solved twice.Map<Integer, Integer> memo = new HashMap<>();int fibonacci(int n) { if (n <= 1) return n; if (memo.containsKey(n)) return memo.get(n); // return cached int result = fibonacci(n - 1) + fibonacci(n - 2); 
memo.put(n, result); // cache before returning
return result;
}

This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space — a massive improvement.

Recursion vs Iteration — When to Use Which

This is one of the most common interview questions about recursion. Here is a clear breakdown:

Use Recursion when:
- The problem has a naturally recursive structure (trees, graphs, divide and conquer)
- The solution is significantly cleaner and easier to understand recursively
- The problem involves exploring multiple paths or choices (backtracking)
- The depth of recursion is manageable (not deep enough to cause stack overflow)

Use Iteration when:
- The problem is linear and a loop is equally clear
- Memory is a concern (iteration uses O(1) stack space vs O(n) for recursion)
- Performance is critical and function call overhead matters
- Java's stack size limit could be hit (the default thread stack is typically 512 KB–1 MB, which overflows after roughly ten to twenty thousand frames of deep recursion, depending on frame size)

The key rule: Every recursive solution can be converted to an iterative one (usually using an explicit stack).
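The "explicit stack" conversion can be sketched with a simple sum, where a Deque stands in for the call stack. This is an illustrative sketch of mine, not code from the article:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeConversion {
    // Recursive sum of 1..n — uses O(n) call-stack frames
    static int sumRecursive(int n) {
        if (n == 0) return 0;
        return n + sumRecursive(n - 1);
    }

    // Same computation with an explicit stack replacing the call stack
    static int sumIterative(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        for (int i = n; i > 0; i--) stack.push(i);     // "call phase": push frames
        int total = 0;
        while (!stack.isEmpty()) total += stack.pop(); // "return phase": unwind
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumRecursive(100)); // 5050
        System.out.println(sumIterative(100)); // 5050
    }
}
```

For a linear problem like this the explicit stack is overkill (a plain accumulator loop would do), but the same push/pop pattern is what makes iterative tree traversals work.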
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.Time and Space Complexity of Recursive FunctionsAnalyzing complexity for recursive functions requires a specific approach.The Recurrence Relation MethodExpress the time complexity as a recurrence relation and solve it.Factorial:T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + n×O(1) = O(n)Fibonacci (naive):T(n) = T(n-1) + T(n-2) + O(1) ≈ 2×T(n-1) = O(2ⁿ)Binary Search:T(n) = T(n/2) + O(1) = O(log n) [by Master Theorem]Merge Sort:T(n) = 2×T(n/2) + O(n) = O(n log n) [by Master Theorem]Space Complexity Rule for RecursionSpace complexity of a recursive function = maximum depth of the call stack × space per frameLinear recursion (factorial, sum): O(n) spaceBinary recursion (Fibonacci naive): O(n) space (maximum depth, not number of nodes)Divide and conquer (merge sort): O(log n) space (depth of recursion tree)Memoized Fibonacci: O(n) space (memo table + call stack)Classic Recursive Problems With SolutionsProblem 1: Reverse a StringString reverse(String s) { if (s.length() <= 1) return s; // base case // last char + reverse of everything before last char return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));}Dry run for "hello":reverse("hello") = 'o' + reverse("hell")reverse("hell") = 'l' + reverse("hel")reverse("hel") = 'l' + reverse("he")reverse("he") = 'e' + reverse("h")reverse("h") = "h"Unwinding: "h" → "he" → "leh" → "lleh" → "olleh" ✅Problem 2: Power Function (x^n)double power(double x, int n) { if (n == 0) return 1; // base case if (n < 0) return 1.0 / power(x, -n); // handle negative if (n % 2 == 0) { double half = power(x, n / 2); return half * half; // x^n = (x^(n/2))^2 } else { return x * power(x, n - 1); }}This is the fast power algorithm — O(log n) time instead of O(n).Problem 3: Fibonacci With Memoizationint[] memo = new int[100];Arrays.fill(memo, -1);int fib(int n) { if (n <= 1) return n; if (memo[n] != -1) return memo[n]; memo[n] = 
fib(n - 1) + fib(n - 2); return memo[n];}Time: O(n) — each value computed once Space: O(n) — memo array + call stackProblem 4: Tower of HanoiThe classic recursion teaching problem. Move n disks from source to destination using a helper rod.void hanoi(int n, char source, char destination, char helper) { if (n == 1) { System.out.println("Move disk 1 from " + source + " to " + destination); return; } // Move n-1 disks from source to helper hanoi(n - 1, source, helper, destination); // Move the largest disk from source to destination System.out.println("Move disk " + n + " from " + source + " to " + destination); // Move n-1 disks from helper to destination hanoi(n - 1, helper, destination, source);}Time Complexity: O(2ⁿ) — minimum moves required is 2ⁿ - 1 Space Complexity: O(n) — call stack depthProblem 5: Generate All Subsets (Power Set)void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { result.add(new ArrayList<>(current)); // add current subset for (int i = index; i < nums.length; i++) { current.add(nums[i]); // include generateSubsets(nums, i + 1, current, result); // recurse current.remove(current.size() - 1); // exclude (backtrack) }}For [1, 2, 3] — generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]Time: O(2ⁿ) — 2ⁿ subsets Space: O(n) — recursion depthProblem 6: Binary Search Recursivelyint binarySearch(int[] arr, int target, int left, int right) { if (left > right) return -1; // base case — not found int mid = left + (right - left) / 2; if (arr[mid] == target) return mid; else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right); else return binarySearch(arr, target, left, mid - 1);}Time: O(log n) — halving the search space each time Space: O(log n) — log n frames on the call stackRecursion on Trees — The Natural HabitatTrees are where recursion truly shines. 
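The tree snippets that follow assume a TreeNode class the article never defines. Here is a minimal definition — the class shape is the common LeetCode convention, assumed by me — plus a quick check of the maximum-depth recursion on a three-node tree:

```java
public class TreeDemo {
    // Minimal binary tree node, as assumed by the article's snippets
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // Maximum depth: base case handles null, recursion handles both subtrees
    static int maxDepth(TreeNode root) {
        if (root == null) return 0;
        return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));
    }

    public static void main(String[] args) {
        // Build the chain 1 -> 2 -> 3 (all left children)
        TreeNode root = new TreeNode(1);
        root.left = new TreeNode(2);
        root.left.left = new TreeNode(3);
        System.out.println(maxDepth(root)); // 3
    }
}
```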
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure — each node's left and right children are trees themselves.// Maximum depth of binary treeint maxDepth(TreeNode root) { if (root == null) return 0; return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));}// Check if tree is symmetricboolean isSymmetric(TreeNode left, TreeNode right) { if (left == null && right == null) return true; if (left == null || right == null) return false; return left.val == right.val && isSymmetric(left.left, right.right) && isSymmetric(left.right, right.left);}// Path sum — does any root-to-leaf path sum to target?boolean hasPathSum(TreeNode root, int target) { if (root == null) return false; if (root.left == null && root.right == null) return root.val == target; return hasPathSum(root.left, target - root.val) || hasPathSum(root.right, target - root.val);}Notice the pattern in all three — base case handles null, recursive case handles left and right subtrees, result combines both.How to Think About Any Recursive Problem — Step by StepThis is the framework you should apply to every new recursive problem you encounter:Step 1 — Identify the base case What is the smallest input where you know the answer directly without any recursion? For arrays it is usually empty array or single element. For trees it is null node. For numbers it is 0 or 1.Step 2 — Trust the recursive call Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally — just trust it. This is the Leap of Faith and it is what makes recursion feel natural.Step 3 — Express the current problem in terms of smaller problems How does the answer for size n relate to the answer for size n-1 (or n/2, or subtrees)? This relationship is your recursive case.Step 4 — Make sure each call moves toward the base case The input must become strictly smaller with each call. 
If it does not, you have infinite recursion.Step 5 — Write the base case first, then the recursive case Always. Writing the recursive case first leads to bugs because you have not defined when to stop.Common Mistakes and How to Avoid ThemMistake 1: Missing or wrong base case The most common mistake. Missing the base case causes StackOverflowError. Wrong base case causes wrong answers.Always ask — what is the simplest possible input, and what should the function return for it? Write that case first.Mistake 2: Not moving toward the base case If you call factorial(n) inside factorial(n) without reducing n, you loop forever. Every recursive call must make the problem strictly smaller.Mistake 3: Trusting your brain to trace deep recursion Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.Mistake 4: Forgetting to return the recursive result// WRONG — result is computed but not returnedint sum(int n) { if (n == 0) return 0; sum(n - 1) + n; // computed but discarded!}// CORRECTint sum(int n) { if (n == 0) return 0; return sum(n - 1) + n;}Mistake 5: Modifying shared state without backtracking In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.Mistake 6: Recomputing the same subproblems Naive Fibonacci computes fib(3) multiple times when computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.Top LeetCode Problems on RecursionThese are organized by pattern — work through them in this order for maximum learning:Pure Recursion Basics:509. Fibonacci Number — Easy — start here, implement with and without memoization344. Reverse String — Easy — recursion on arrays206. Reverse Linked List — Easy — recursion on linked list50. 
Pow(x, n) — Medium — fast power with recursion

Tree Recursion (Most Important):
104. Maximum Depth of Binary Tree — Easy — simplest tree recursion
112. Path Sum — Easy — decision recursion on tree
101. Symmetric Tree — Easy — mirrored recursion on two subtrees
110. Balanced Binary Tree — Easy — bottom-up recursion
236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion
124. Binary Tree Maximum Path Sum — Hard — advanced tree recursion

Divide and Conquer:
148. Sort List — Medium — merge sort on linked list
240. Search a 2D Matrix II — Medium — divide and conquer

Backtracking:
78. Subsets — Medium — generate all subsets
46. Permutations — Medium — generate all permutations
77. Combinations — Medium — generate combinations
79. Word Search — Medium — backtracking on grid
51. N-Queens — Hard — classic backtracking

Memoization / Dynamic Programming:
70. Climbing Stairs — Easy — Fibonacci variant with memoization
322. Coin Change — Medium — recursion with memoization to DP
139. Word Break — Medium — memoized recursion

Recursion Cheat Sheet

// Linear recursion template
returnType solve(input) {
    if (baseCase) return directAnswer;
    // process current
    return solve(smallerInput);
}

// Tree recursion template
returnType solve(TreeNode root) {
    if (root == null) return baseValue;
    returnType left = solve(root.left);
    returnType right = solve(root.right);
    return combine(left, right, root.val);
}

// Backtracking template
void backtrack(choices, current, result) {
    if (goalReached) {
        result.add(copy of current);
        return;
    }
    for (choice : choices) {
        make(choice);    // add to current
        backtrack(...);  // recurse
        undo(choice);    // remove from current
    }
}

// Memoization template
Map<Input, Output> memo = new HashMap<>();
returnType solve(input) {
    if (baseCase) return directAnswer;
    if (memo.containsKey(input)) return memo.get(input);
    returnType result = solve(smallerInput);
    memo.put(input, result);
    return result;
}

FAQs — People Also Ask

Q1. What is recursion in Java with a simple example?
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial — factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3) and so on until factorial(1) returns 1 directly.Q2. What is the difference between recursion and iteration? Iteration uses loops (for, while) and runs in O(1) space. Recursion uses function calls and uses O(n) stack space for n levels deep. Recursion is often cleaner for tree and graph problems. Iteration is better when memory is a concern or the problem is inherently linear.Q3. What causes StackOverflowError in Java recursion? StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. This is caused by missing base case, wrong base case, or input too large for Java's default stack size limit.Q4. What is the difference between recursion and dynamic programming? Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.Q5. What is tail recursion and does Java support tail call optimization? Tail recursion is when the recursive call is the absolute last operation in the function. Java does NOT support tail call optimization — Java always creates a new stack frame for each call even if it is tail recursive. Languages like Scala and Kotlin (on the JVM) do support it with the tailrec keyword.Q6. How do you convert recursion to iteration? Every recursive solution can be converted to iterative using an explicit stack data structure. The call stack's behavior is replicated manually — push the initial call, loop while stack is not empty, pop, process, and push sub-calls. Tree traversals are a common example of this conversion.ConclusionRecursion is not magic. 
It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.The learning path from here is clear — start with simple problems like Fibonacci and array sum. Move to tree problems where recursion is most natural. Then tackle backtracking. Finally add memoization to bridge into dynamic programming.Every hour you spend understanding recursion deeply pays dividends across the entire rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.

Tags: Recursion, Java, Base Case, Call Stack, Backtracking, Dynamic Programming
What Is Dynamic Programming? Origin Story, Real-Life Uses, LeetCode Problems & Complete Beginner Guide


Introduction — Why Dynamic Programming Feels Hard (And Why It Isn't)

If you've ever stared at a LeetCode problem, read the solution, understood every single line, and still had absolutely no idea how someone arrived at it — welcome. You've just experienced the classic Dynamic Programming (DP) confusion.

DP has a reputation. People treat it like some dark art reserved for competitive programmers or Google engineers. The truth? Dynamic Programming is one of the most logical, learnable, and satisfying techniques in all of computer science. Once it clicks, it really clicks.

This guide will take you from zero to genuinely confident. We'll cover where DP came from, how it works, what patterns to learn, how to recognize DP problems, real-world places it shows up, LeetCode problems to practice, time complexity analysis, and the mistakes that trip up even experienced developers.

Let's go.

The Origin Story — Who Invented Dynamic Programming and Why?

The term "Dynamic Programming" was coined by Richard Bellman in the early 1950s while working at RAND Corporation. Here's the funny part: the name was deliberately chosen to sound impressive and vague.

Bellman was doing mathematical research whose government funding ultimately ran through the office of the US Secretary of Defense, Charles Wilson — a man with a well-known distaste for the word "research," who would have found the work difficult to fund if described accurately. So Bellman invented a name that sounded suitably grand and mathematical: Dynamic Programming.

In his autobiography, Bellman wrote that he picked the word "dynamic" because it had a precise technical meaning and was also impossible to use negatively. "Programming" referred to the mathematical sense — planning and decision-making — not computer programming.

The underlying idea?
Break a complex problem into overlapping subproblems, solve each subproblem once, and store the result so you never solve it twice.Bellman's foundational contribution was the Bellman Equation, which underpins not just algorithms but also economics, operations research, and modern reinforcement learning.So the next time DP feels frustrating, remember — even its inventor named it specifically to confuse people. You're in good company.What Is Dynamic Programming? (Simple Definition)Dynamic Programming is an algorithmic technique used to solve problems by:Breaking them down into smaller overlapping subproblemsSolving each subproblem only onceStoring the result (memoization or tabulation)Building up the final solution from those stored resultsThe key insight is overlapping subproblems + optimal substructure.Overlapping subproblems means the same smaller problems come up again and again. Instead of solving them every time (like plain recursion does), DP solves them once and caches the answer.Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems.If a problem has both these properties — it's a DP problem.The Two Approaches to Dynamic Programming1. Top-Down with Memoization (Recursive + Cache)You write a recursive solution exactly as you would naturally, but add a cache (usually a dictionary or array) to store results you've already computed.fib(n):if n in cache: return cache[n]if n <= 1: return ncache[n] = fib(n-1) + fib(n-2)return cache[n]This is called memoization — remember what you computed so you don't repeat yourself.Pros: Natural to write, mirrors the recursive thinking, easy to reason about. Cons: Stack overhead from recursion, risk of stack overflow on large inputs.2. 
Bottom-Up with Tabulation (Iterative)

You figure out the order in which subproblems need to be solved, then solve them iteratively from the smallest up, filling a table.

fib(n):
    dp = [0, 1]
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

This is called tabulation — fill a table, cell by cell, bottom to top.

Pros: No recursion overhead, usually faster in practice, easier to optimize space.
Cons: Requires thinking about the order of computation upfront.

🧩 Dynamic Programming Template Code

Before diving into how to recognize DP problems, here are ready-to-use Java templates for every major DP pattern. Think of these as your reusable blueprints — every DP problem you ever solve will fit into one of these structures. Just define your state, plug in your recurrence relation, and you are good to go.

Template 1 — Top-Down (Memoization)

import java.util.HashMap;
import java.util.Map;

public class TopDownDP {
    Map<Integer, Integer> memo = new HashMap<>();

    public int solve(int n) {
        // Base case
        if (n <= 1) return n;
        // Check cache
        if (memo.containsKey(n)) return memo.get(n);
        // Recurrence relation — change this part for your problem
        int result = solve(n - 1) + solve(n - 2);
        // Store in cache
        memo.put(n, result);
        return result;
    }
}

Template 2 — Bottom-Up (Tabulation)

public class BottomUpDP {
    public int solve(int n) {
        // Guard against n = 0 or 1 before indexing dp[1]
        if (n <= 1) return n;
        // Create DP table
        int[] dp = new int[n + 1];
        // Base cases
        dp[0] = 0;
        dp[1] = 1;
        // Fill the table bottom-up
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}

Template 3 — Bottom-Up with Space Optimization

public class SpaceOptimizedDP {
    public int solve(int n) {
        if (n <= 1) return n;
        // Only keep last two values instead of full table
        int prev2 = 0;
        int prev1 = 1;
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}

Template 4 — 2D DP (Two Sequences or Grid)

public class TwoDimensionalDP {
    public int solve(String s1, String s2) {
        int m = s1.length();
        int n = s2.length();
        // Create 2D DP table
        int[][] dp = new int[m + 1][n + 1];
        // Base cases — first row and column
        for (int i = 0; i <= m; i++) dp[i][0] = i;
        for (int j = 0; j <= n; j++) dp[0][j] = j;
        // Fill table cell by cell
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                // Recurrence relation — change this part for your problem
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j],
                            Math.min(dp[i][j - 1], dp[i - 1][j - 1]));
                }
            }
        }
        return dp[m][n];
    }
}

Template 5 — Knapsack Pattern

public class KnapsackDP {
    public int solve(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        // dp[i][w] = max value using first i items with capacity w
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= capacity; w++) {
                // Don't take item i
                dp[i][w] = dp[i - 1][w];
                // Take item i if it fits
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i][w],
                            values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                }
            }
        }
        return dp[n][capacity];
    }
}

💡 How to use these templates:
Step 1 — Identify which pattern your problem fits into.
Step 2 — Define what dp[i] or dp[i][j] means in plain English before writing any code.
Step 3 — Write your recurrence relation on paper first.
Step 4 — Plug it into the matching template above.
Step 5 — Handle your specific base cases carefully.

🎥 Visual Learning Resource — Watch This Before Moving Forward

If you prefer learning by watching before reading, this free full-length course by freeCodeCamp is one of the best Dynamic Programming resources on the internet. Watch it alongside this guide for maximum understanding.

Credit: freeCodeCamp — a free, nonprofit coding education platform.

How to Recognize a Dynamic Programming Problem

Ask yourself these four questions:

1. Can I define the problem in terms of smaller versions of itself? If you can write a recursive formula (recurrence relation), DP might apply.

2. Do the subproblems overlap?
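A concrete way to answer this question is to instrument the naive recursion and count how often the same argument comes back. An illustrative sketch of mine, using naive Fibonacci (the class and counter are my additions):

```java
import java.util.HashMap;
import java.util.Map;

public class OverlapCheck {
    static Map<Integer, Integer> timesComputed = new HashMap<>();

    // Naive Fibonacci, instrumented to count how often each n is solved
    static int fib(int n) {
        timesComputed.merge(n, 1, Integer::sum);
        if (n <= 1) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        fib(10);
        // The same subproblem fib(2) is solved over and over —
        // the overlapping-subproblems signal that calls for DP
        System.out.println(timesComputed.get(2)); // 34
    }
}
```

A single memo lookup would collapse all of those repeats into one computation.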
If a naive recursive solution would recompute the same thing many times, DP is the right tool.3. Is there an optimal substructure? Is the best answer to the big problem made up of best answers to smaller problems?4. Are you looking for a count, minimum, maximum, or yes/no answer? DP problems often ask: "What is the minimum cost?", "How many ways?", "Can we achieve X?"Red flag words in problem statements: minimum, maximum, shortest, longest, count the number of ways, can we reach, is it possible, fewest steps.The Core DP Patterns You Must LearnMastering DP is really about recognizing patterns. Here are the most important ones:Pattern 1 — 1D DP (Linear) Problems where the state depends on previous elements in a single sequence. Examples: Fibonacci, Climbing Stairs, House Robber.Pattern 2 — 2D DP (Grid / Two-sequence) Problems with two dimensions of state, often grids or two strings. Examples: Longest Common Subsequence, Edit Distance, Unique Paths.Pattern 3 — Interval DP You consider all possible intervals or subarrays and build solutions from them. Examples: Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning.Pattern 4 — Knapsack DP (0/1 and Unbounded) You decide whether to include or exclude items under a capacity constraint. Examples: 0/1 Knapsack, Coin Change, Partition Equal Subset Sum.Pattern 5 — DP on Trees State is defined per node; you combine results from children. Examples: Diameter of Binary Tree, House Robber III, Maximum Path Sum.Pattern 6 — DP on Subsets / Bitmask DP State includes a bitmask representing which elements have been chosen. Examples: Travelling Salesman Problem, Shortest Superstring.Pattern 7 — DP on Strings Matching, editing, or counting arrangements within strings. Examples: Longest Palindromic Subsequence, Regular Expression Matching, Wildcard Matching.Top LeetCode Problems to Practice Dynamic Programming (With Links)Here are the essential problems, organized by difficulty and pattern. 
Solve them in this order.

Beginner — Warm Up
- Climbing Stairs — 1D DP — https://leetcode.com/problems/climbing-stairs/
- Fibonacci Number — 1D DP — https://leetcode.com/problems/fibonacci-number/
- House Robber — 1D DP — https://leetcode.com/problems/house-robber/
- Min Cost Climbing Stairs — 1D DP — https://leetcode.com/problems/min-cost-climbing-stairs/
- Best Time to Buy and Sell Stock — 1D DP — https://leetcode.com/problems/best-time-to-buy-and-sell-stock/

Intermediate — Core Patterns
- Coin Change — Knapsack — https://leetcode.com/problems/coin-change/
- Longest Increasing Subsequence — 1D DP — https://leetcode.com/problems/longest-increasing-subsequence/
- Longest Common Subsequence — 2D DP — https://leetcode.com/problems/longest-common-subsequence/
- 0/1 Knapsack (via Subset Sum) — Knapsack — https://leetcode.com/problems/partition-equal-subset-sum/
- Unique Paths — 2D Grid DP — https://leetcode.com/problems/unique-paths/
- Jump Game — 1D DP / Greedy — https://leetcode.com/problems/jump-game/
- Word Break — String DP — https://leetcode.com/problems/word-break/
- Decode Ways — 1D DP — https://leetcode.com/problems/decode-ways/
- Edit Distance — 2D String DP — https://leetcode.com/problems/edit-distance/
- Triangle — 2D DP — https://leetcode.com/problems/triangle/

Advanced — Interview Level
- Burst Balloons — Interval DP — https://leetcode.com/problems/burst-balloons/
- Regular Expression Matching — String DP — https://leetcode.com/problems/regular-expression-matching/
- Wildcard Matching — String DP — https://leetcode.com/problems/wildcard-matching/
- Palindrome Partitioning II — Interval DP — https://leetcode.com/problems/palindrome-partitioning-ii/
- Maximum Profit in Job Scheduling — DP + Binary Search — https://leetcode.com/problems/maximum-profit-in-job-scheduling/
- Distinct Subsequences — 2D DP — https://leetcode.com/problems/distinct-subsequences/
- Cherry Pickup — 3D DP — https://leetcode.com/problems/cherry-pickup/

Real-World Use Cases of Dynamic Programming

DP is not just for coding interviews. It is deeply embedded in the technology you use every day.

1.
Google Maps & Navigation (Shortest Path) The routing engines behind GPS apps use DP-based algorithms like Dijkstra and Bellman-Ford to find the shortest or fastest path between two points across millions of nodes.2. Spell Checkers & Autocorrect (Edit Distance) When your phone corrects "teh" to "the," it is computing Edit Distance — a classic DP problem — between what you typed and every word in the dictionary.3. DNA Sequence Alignment (Bioinformatics) Researchers use the Needleman-Wunsch and Smith-Waterman algorithms — both DP — to align DNA and protein sequences and find similarities between species or identify mutations.4. Video Compression (MPEG, H.264) Modern video codecs use DP to determine the most efficient way to encode video frames, deciding which frames to store as full images and which to store as differences from the previous frame.5. Financial Portfolio Optimization Investment algorithms use DP to find the optimal allocation of assets under risk constraints — essentially a variant of the knapsack problem.6. Natural Language Processing (NLP) The Viterbi algorithm — used in speech recognition, part-of-speech tagging, and machine translation — is a DP algorithm. Every time Siri or Google Assistant understands your sentence, DP played a role.7. Game AI (Chess, Checkers) Game trees and minimax algorithms with memoization use DP to evaluate board positions and find the best move without recomputing already-seen positions.8. Compiler Optimization Compilers use DP to decide the optimal order of operations and instruction scheduling to generate the most efficient machine code.9. Text Justification (Word Processors) Microsoft Word and LaTeX use DP to optimally break paragraphs into lines — minimizing raggedness and maximizing visual appeal.10. 
Resource Scheduling in Cloud Computing AWS, Google Cloud, and Azure use DP-based scheduling to assign computational tasks to servers in the most cost-efficient way possible.

Time Complexity Analysis of Common DP Problems

Understanding the time complexity of DP is critical for interviews and for building scalable systems.

- Fibonacci (naive recursion): O(2ⁿ) time, O(n) space — exponential, terrible
- Fibonacci (DP): O(n) time, O(1) space with optimization — linear, excellent
- Longest Common Subsequence: O(m × n) time, O(m × n) space — m, n = lengths of the two strings
- Edit Distance: O(m × n) time, O(m × n) space — can optimize space to O(n)
- 0/1 Knapsack: O(n × W) time, O(n × W) space — n = items, W = capacity
- Coin Change: O(n × amount) time, O(amount) space — classic tabulation
- Longest Increasing Subsequence: O(n²) or O(n log n) time, O(n) space — binary search version is faster
- Matrix Chain Multiplication: O(n³) time, O(n²) space — interval DP
- Travelling Salesman (bitmask): O(2ⁿ × n²) time, O(2ⁿ × n) space — still exponential but manageable for small n

The general rule: DP trades time for space. You use memory to avoid recomputation. The time complexity equals the number of unique states multiplied by the work done per state.

How to Learn and Master Dynamic Programming — Step by Step

Here is an honest, structured path to mastery:

Step 1 — Get recursion absolutely solid first. DP is memoized recursion at its core. If you cannot write clean recursive solutions confidently, DP will remain confusing. Practice at least 20 pure recursion problems first.

Step 2 — Start with the classics. Fibonacci → Climbing Stairs → House Robber → Coin Change. These teach you the core pattern of defining state and transition without overwhelming you.

Step 3 — Learn to define state explicitly. Before writing any code, ask: "What does dp[i] represent?" Write it in plain English. "dp[i] = the minimum cost to reach step i." This single habit separates good DP thinkers from struggling ones.

Step 4 — Write the recurrence relation before coding. On paper or in a comment. Example: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
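That example recurrence is the one from LeetCode's Min Cost Climbing Stairs. Plugged into a bottom-up table it becomes the following — a sketch of mine, assuming the problem's convention that you may start from step 0 or step 1 for free and that reaching "the top" means passing the last step:

```java
public class MinCostClimbingStairs {
    // dp[i] = minimum total cost to stand on step i (step n is "the top")
    static int minCost(int[] cost) {
        int n = cost.length;
        int[] dp = new int[n + 1];
        dp[0] = 0; // starting on step 0 is free
        dp[1] = 0; // starting on step 1 is free
        for (int i = 2; i <= n; i++) {
            // Arrive from one step below, or jump from two steps below
            dp[i] = Math.min(dp[i - 1] + cost[i - 1], dp[i - 2] + cost[i - 2]);
        }
        return dp[n];
    }

    public static void main(String[] args) {
        System.out.println(minCost(new int[]{10, 15, 20})); // 15
    }
}
```

The state definition comment on dp[i] is exactly the Step 3 habit described above: write what the cell means before filling it.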
If you can write the recurrence, the code writes itself.
Step 5 — Master one pattern at a time. Don't jump between knapsack and interval DP in the same week. Spend a few days on each pattern until it feels intuitive.
Step 6 — Solve the same problem both ways. Top-down and bottom-up. This builds deep understanding of what DP is actually doing.
Step 7 — Optimize space after getting correctness. Many 2D DP solutions can use a single row instead of a full matrix. Learn this optimization after you understand the full solution.
Step 8 — Do timed practice under interview conditions. Give yourself 35 minutes per problem. Review what you got wrong. DP is a muscle — it builds with reps.

Common Mistakes in Dynamic Programming (And How to Avoid Them)

Mistake 1 — Jumping to code before defining state. The most common DP error. Always define what dp[i] or dp[i][j] means before writing a single line of code.
Mistake 2 — Wrong base cases. A single wrong base case corrupts every answer built on top of it. Trace through your base cases manually on a tiny example before running code.
Mistake 3 — Off-by-one errors in indexing. Whether your dp array is 0-indexed or 1-indexed must be 100% consistent throughout. This causes more bugs in DP than almost anything else.
Mistake 4 — Confusing top-down with bottom-up state order. In bottom-up DP, you must ensure that when you compute dp[i], all values it depends on are already filled. If you compute in the wrong order, you get garbage answers.
Mistake 5 — Memoizing in the wrong dimension. In 2D problems, some people cache only one dimension when the state actually requires two. Always identify all variables that affect the outcome.
Mistake 6 — Using global mutable state in recursion. If you use a shared array and don't clear it between test cases, you'll get wrong answers on subsequent inputs. Always scope your cache correctly.
Mistake 7 — Not considering the full state space.
In problems like Knapsack, forgetting that the state is (item index, remaining capacity) — not just item index — leads to fundamentally wrong solutions.
Mistake 8 — Giving up after not recognizing the pattern immediately. DP problems don't announce themselves. The skill is learning to ask "are there overlapping subproblems here?" on every problem. This takes time. Don't mistake unfamiliarity for inability.

Frequently Asked Questions About Dynamic Programming

Q: Is Dynamic Programming the same as recursion?
Not exactly. Recursion is a technique for breaking problems into smaller pieces. DP is recursion plus memoization — or iterative tabulation. All DP can be written recursively, but not all recursion is DP.

Q: What is the difference between DP and Divide and Conquer?
Divide and Conquer (like Merge Sort) breaks problems into non-overlapping subproblems. DP is used when subproblems overlap — meaning the same subproblem is solved multiple times in a naive approach.

Q: How do I know when NOT to use DP?
If the subproblems don't overlap (no repeated computation), greedy or divide-and-conquer may be better. If the problem has no optimal substructure, DP won't give a correct answer.

Q: Do I need to memorize DP solutions for interviews?
No. You need to recognize patterns and be able to derive the recurrence relation. Memorizing solutions without understanding them will fail you in interviews. Focus on the thinking process.

Q: How long does it take to get good at DP?
Most people start to feel genuinely comfortable after solving 40–60 varied DP problems with deliberate practice. The first 10 feel impossible. The next 20 feel hard. After 50, patterns start feeling obvious.

Q: What programming language is best for DP?
Any language works. Python is often used for learning because its dictionaries make memoization trivial. C++ is preferred in competitive programming for its speed. For interviews, use whatever language you're most comfortable in.

Q: What is space optimization in DP?
Many DP problems only look back one or two rows to compute the current row. In those cases, you can replace an n×m table with just two arrays (or even one), reducing space complexity from O(n×m) to O(m). This is called space optimization or the rolling array technique.

Q: Can DP be applied to graph problems?
Absolutely. Shortest path algorithms like Bellman-Ford are DP. Longest path in a DAG is DP. DP on trees is a rich subfield. Anywhere you have states and transitions, DP can potentially apply.

Q: Is Greedy a type of Dynamic Programming?
Greedy is related but distinct. Greedy makes locally optimal choices without reconsidering. DP considers all choices and picks the globally optimal one. Some DP solutions reduce to greedy when the structure allows, but they are different techniques.

Q: What resources should I use to learn DP?
For structured learning: Neetcode.io (organized problem list), Striver's DP Series on YouTube, and the book "Introduction to Algorithms" (CLRS) for theoretical depth. For practice: LeetCode's Dynamic Programming study plan and Codeforces for competitive DP.

Final Thoughts — Dynamic Programming Is a Superpower

Dynamic Programming is genuinely one of the most powerful ideas in computer science. It shows up in your GPS, your autocorrect, your streaming video, your bank's risk models, and the AI assistants you talk to daily.

The path to mastering it is not memorization. It is developing the habit of asking: can I break this into smaller problems that overlap? And then learning to define state clearly, write the recurrence, and trust the process.

Start with Climbing Stairs. Write dp[i] in plain English before every problem. Solve everything twice — top-down and bottom-up. Do 50 problems with genuine reflection, not just accepted solutions.

The click moment will come. And when it does, you'll wonder why it ever felt hard.
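As a concrete illustration of the rolling-array answer in the FAQ above, here is a hedged sketch on a grid DP where each row depends only on the previous row. The unique-paths-in-a-grid problem and all names here are our example choices, not part of this article:

```java
class RollingArrayDemo {
    // Counts paths from the top-left to the bottom-right of an m×n grid,
    // moving only right or down. The full table would be O(m×n) space; since
    // row i depends only on row i-1, one reused 1-D array of size n suffices.
    static long uniquePaths(int m, int n) {
        long[] row = new long[n];
        java.util.Arrays.fill(row, 1L); // first row: exactly one way to reach each cell
        for (int i = 1; i < m; i++) {
            for (int j = 1; j < n; j++) {
                // Before this assignment, row[j] still holds dp[i-1][j] (cell above)
                // and row[j-1] already holds dp[i][j-1] (cell to the left).
                row[j] = row[j] + row[j - 1];
            }
        }
        return row[n - 1]; // O(n) space instead of O(m×n)
    }
}
```

For a 3×7 grid this returns 28 — the same answer the full 2-D table would give, at a fraction of the memory.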

Dynamic ProgrammingMemoizationTabulationJavaOrigin StoryRichard Bellman
LeetCode 122 — Best Time to Buy and Sell Stock II | Every Approach Explained

LeetCode 122 — Best Time to Buy and Sell Stock II | Every Approach Explained

🚀 Try This Problem First!
Before reading the solution, attempt it yourself on LeetCode — you'll retain the concept far better.
🔗 Problem Link: https://leetcode.com/problems/best-time-to-buy-and-sell-stock-ii/

Understanding the Problem

You are given an array prices where prices[i] is the stock price on day i. Unlike the classic version, here you can make as many transactions as you want — but you can only hold one share at a time. You may buy and sell on the same day.

Goal: Return the maximum total profit achievable.

Key Rules:
You can buy and sell multiple times.
You cannot hold more than one share at a time — you must sell before buying again.
If no profit is possible, return 0.

Constraints:
1 ≤ prices.length ≤ 3 × 10⁴
0 ≤ prices[i] ≤ 10⁴

How This Differs From LeetCode 121

In LeetCode 121, you were limited to exactly one buy-sell transaction. Here, the restriction is lifted — you can participate in as many transactions as you want. This fundamentally changes the strategy. Instead of hunting for the single best pair, you want to capture every profitable price movement in the array.

The Core Insight

Look at the price chart mentally. Every time the price goes up from one day to the next, that's money on the table. The question is — how do you collect all of it?

The answer is surprisingly simple: add every single upward price difference to your profit. If prices go up three days in a row from 1 → 3 → 5 → 8, you collect (3-1) + (5-3) + (8-5) = 7, which is exactly the same as buying at 1 and selling at 8. You never miss a gain.

This is the foundation of all approaches below.

Approach 1 — Simple Greedy (Collect Every Upward Move)

Intuition: Every time prices[i] > prices[i-1], add the difference to profit.
You are essentially buying at every valley and selling at every peak, collecting each individual daily gain without explicitly tracking buy/sell days.

Why it works: The total gain from buying at day 0 and selling at day N is mathematically equal to the sum of all positive daily differences in between. You never lose anything by collecting gains day by day.

Example: prices = [1, 2, 3, 4, 5]
Daily gains: (2-1) + (3-2) + (4-3) + (5-4) = 1+1+1+1 = 4
Same as buying at 1 and selling at 5 directly.

class Solution {
    public int maxProfit(int[] prices) {
        int maxProfit = 0;
        for (int i = 1; i < prices.length; i++) {
            if (prices[i] > prices[i - 1]) {
                maxProfit += prices[i] - prices[i - 1];
            }
        }
        return maxProfit;
    }
}

Time Complexity: O(N) — single pass through the array.
Space Complexity: O(1) — no extra space used.

This is the cleanest and most recommended solution for this problem.

Approach 2 — Peak Valley Approach

Intuition: Instead of collecting every daily gain, explicitly find every valley (local minimum) to buy at and every peak (local maximum) to sell at. You buy when price stops falling and sell when price stops rising.

How it works: Scan through the array. When you find a valley (prices[i] ≤ prices[i+1]), that is your buy point. Then keep going until you find a peak (prices[i] ≥ prices[i+1]) — that is your sell point. Add the peak minus valley to profit. Repeat.

Example: prices = [7, 1, 5, 3, 6, 4]
Valley at index 1 (price = 1), Peak at index 2 (price = 5) → profit += 4
Valley at index 3 (price = 3), Peak at index 4 (price = 6) → profit += 3
Total = 7 ✅

class Solution {
    public int maxProfit(int[] prices) {
        int i = 0;
        int maxProfit = 0;
        int valley, peak;
        while (i < prices.length - 1) {
            while (i < prices.length - 1 && prices[i] >= prices[i + 1]) {
                i++;
            }
            valley = prices[i];
            while (i < prices.length - 1 && prices[i] <= prices[i + 1]) {
                i++;
            }
            peak = prices[i];
            maxProfit += peak - valley;
        }
        return maxProfit;
    }
}

Time Complexity: O(N) — each element is visited at most twice.
Space Complexity: O(1) — no extra space used.

This approach is more explicit and easier to visualize on a graph, though the code is slightly more involved than Approach 1.

Approach 3 — Two Pointer

Intuition: Use two pointers i (buy day) and j (sell day). Move j forward one step at a time. Whenever prices[j] > prices[i], you have a profitable window — add the profit and immediately move i to j (simulate selling and rebuying at the same price on the same day). Whenever prices[j] < prices[i], just move i to j since a cheaper buy day has been found.

Why moving i to j after every profitable sale works: Selling at j and immediately rebuying at j costs nothing (profit of 0 for that rebuy). But it positions i at the latest price so you can catch the next upward movement. This correctly simulates collecting every upward segment.

Example: prices = [7, 1, 5, 3, 6, 4]
i=0, j=1 → 7 > 1, move i to 1. j=2.
i=1, j=2 → 1 < 5, profit += 4, move i to 2. j=3.
i=2, j=3 → 5 > 3, move i to 3. j=4.
i=3, j=4 → 3 < 6, profit += 3, move i to 4. j=5.
i=4, j=5 → 6 > 4, move i to 5. j=6. Loop ends.
Total profit = 7 ✅

class Solution {
    public int maxProfit(int[] prices) {
        int i = 0;
        int j = 1;
        int maxProfit = 0;
        while (i < j && j < prices.length) {
            if (prices[i] > prices[j]) {
                i = j;
            } else {
                maxProfit += prices[j] - prices[i];
                i = j;
            }
            j++;
        }
        return maxProfit;
    }
}

Time Complexity: O(N) — j traverses the array exactly once.
Space Complexity: O(1) — only three integer variables.

This approach is functionally identical to Approach 1 — both collect every upward daily movement. The two pointer framing makes the buy/sell simulation more explicit.

Approach 4 — Dynamic Programming

Intuition: At any point in time, you are in one of two states — either you hold a stock or you do not hold a stock.
Define two DP values updated each day:
hold = maximum profit if you are currently holding a stock at the end of this day.
cash = maximum profit if you are not holding any stock at the end of this day.

Transitions:
To hold on day i: either you already held yesterday, or you buy today. hold = max(hold, cash - prices[i])
To have cash on day i: either you already had cash yesterday, or you sell today. cash = max(cash, hold + prices[i])

Initialization:
hold = -prices[0] (you bought on day 0)
cash = 0 (you did nothing on day 0)

class Solution {
    public int maxProfit(int[] prices) {
        int hold = -prices[0];
        int cash = 0;
        for (int i = 1; i < prices.length; i++) {
            hold = Math.max(hold, cash - prices[i]);
            cash = Math.max(cash, hold + prices[i]);
        }
        return cash;
    }
}

Time Complexity: O(N) — single pass.
Space Complexity: O(1) — only two variables maintained at each step.

This approach is the most powerful because it extends naturally to harder variants of this problem — like LeetCode 309 (with cooldown) and LeetCode 714 (with transaction fee) — where greedy no longer works and you need explicit state tracking.

Dry Run — All Approaches on Example 1

Input: prices = [7, 1, 5, 3, 6, 4], Expected Output: 7

Approach 1 (Simple Greedy):
Day 1→2: 1 - 7 = -6, skip.
Day 2→3: 5 - 1 = 4, add. profit = 4.
Day 3→4: 3 - 5 = -2, skip.
Day 4→5: 6 - 3 = 3, add. profit = 7.
Day 5→6: 4 - 6 = -2, skip.
Result = 7 ✅

Approach 4 (DP):
Start: hold = -7, cash = 0.
Day 1 (price=1): hold = max(-7, 0-1) = -1. cash = max(0, -1+1) = 0.
Day 2 (price=5): hold = max(-1, 0-5) = -1. cash = max(0, -1+5) = 4.
Day 3 (price=3): hold = max(-1, 4-3) = 1. cash = max(4, 1+3) = 4.
Day 4 (price=6): hold = max(1, 4-6) = 1. cash = max(4, 1+6) = 7.
Day 5 (price=4): hold = max(1, 7-4) = 3. cash = max(7, 3+4) = 7.
Result = 7 ✅

Comparison of All Approaches

Approach 1 — Simple Greedy: Code simplicity: Simplest possible. Best for interviews — clean and readable. Does not extend to constrained variants.
Approach 2 — Peak Valley: Code simplicity: Moderate.
Best for visual/conceptual understanding. Slightly verbose but maps directly to a chart.
Approach 3 — Two Pointer: Code simplicity: Simple. Explicit simulation of buy/sell actions. Functionally identical to Approach 1.
Approach 4 — Dynamic Programming: Code simplicity: Moderate. Most powerful — extends to cooldown, fee, and k-transaction variants. Worth mastering for the full stock problem series.

Common Mistakes to Avoid

Thinking you need to find exact buy/sell days: The problem only asks for maximum profit — you do not need to output which days you traded. This frees you to use the simple greedy sum approach.
Trying to find the global minimum and maximum: Unlike LeetCode 121, the single best buy-sell pair is not always optimal here. You need to capture multiple smaller movements, not one big one.
Holding more than one share: You cannot buy twice in a row without selling in between. In Approach 3, moving i = j after every transaction ensures you always sell before the next buy.
Not handling a flat or decreasing array: If prices never go up, all approaches correctly return 0 — the greedy sum adds nothing, peak-valley finds no valid pairs, and DP's cash stays at 0.

Complexity Summary

All four approaches run in O(N) time and O(1) space. The difference between them is conceptual clarity and extensibility, not raw performance.

The Full Stock Problem Series on LeetCode

This problem is part of a six-problem series. Understanding them in order builds intuition progressively:
LeetCode 121 — One transaction only. Two pointer / min tracking greedy. [A blog is also available on this — Read Now]
LeetCode 123 — At most 2 transactions. DP with explicit state for two transactions. [A blog is also available on this — Read Now]
LeetCode 188 — At most k transactions. Generalized DP.
LeetCode 309 — Unlimited transactions with cooldown after selling. DP with three states.
LeetCode 714 — Unlimited transactions with a fee per transaction.
DP with adjusted transitions.

Each problem adds one constraint on top of the previous. If you understand the DP state machine from Approach 4 deeply, every problem in this series becomes a small modification of the same framework.

Key Takeaways

✅ When transactions are unlimited, collect every upward daily price movement — that is the global optimum.
✅ The sum of all positive daily differences equals the sum of all peak-valley differences. Both are provably optimal.
✅ The two pointer approach explicitly simulates buy and sell events — moving i = j after a sale means selling and immediately rebuying at the same price to stay positioned for the next gain.
✅ The DP approach with hold and cash states is the most versatile — it is the foundation for every harder variant in the stock series.
✅ Always initialize maxProfit = 0 so that the no-profit case (prices only falling) is handled correctly without extra conditions.

Happy Coding! Once you have this problem locked down, the rest of the stock series will feel like natural extensions rather than new problems entirely. 🚀
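To show how small the "adjusted transitions" really are, here is a hedged sketch of the fee variant (LeetCode 714) built on the hold/cash state machine from Approach 4. The only change from the Approach 4 code is subtracting the fee on each sell; class and method names are our own:

```java
class StockWithFee {
    // Same two-state DP as Approach 4; the fee is charged once per completed sale.
    static int maxProfit(int[] prices, int fee) {
        int hold = -prices[0]; // best profit so far while holding a share
        int cash = 0;          // best profit so far while holding nothing
        for (int i = 1; i < prices.length; i++) {
            hold = Math.max(hold, cash - prices[i]);       // keep holding, or buy today
            cash = Math.max(cash, hold + prices[i] - fee); // keep cash, or sell today and pay the fee
        }
        return cash;
    }
}
```

With prices = [1, 3, 2, 8, 4, 9] and fee = 2 this yields 8: buy at 1, sell at 8, buy at 4, sell at 9, paying the fee twice. The greedy approaches cannot handle the fee, but the state machine absorbs it in one line.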

LeetCodeGreedyTwo PointersDynamic ProgrammingMediumJavaArrays
Valid Anagram – Frequency Counting Pattern Explained (LeetCode 242)

Valid Anagram – Frequency Counting Pattern Explained (LeetCode 242)

🔗 Problem Link
LeetCode 242 – Valid Anagram 👉 https://leetcode.com/problems/valid-anagram/

Introduction

This is one of the most important string frequency problems in coding interviews. The idea of checking whether two strings are anagrams appears in many variations:
Group Anagrams
Ransom Note
Find the Difference
Permutation in String
If you master this pattern, you unlock a whole category of problems. Let's break it down step by step.

📌 Problem Understanding

Two strings are anagrams if:
They contain the same characters
With the same frequencies
Order does not matter

Example 1
Input: s = "anagram", t = "nagaram"
Output: true
Both contain: a → 3, n → 1, g → 1, r → 1, m → 1

Example 2
Input: s = "rat", t = "car"
Output: false
Different character frequencies.

🧠 Intuition

The core idea: if two strings are anagrams, their character frequencies must match exactly. So we:
Check if lengths are equal.
Count the frequency of characters in the first string.
Subtract frequencies using the second string.
If at any point a frequency becomes negative → not an anagram.

💻 Your Code

class Solution {
    public boolean isAnagram(String s, String t) {
        if (s.length() != t.length()) return false;
        HashMap<Character, Integer> mp = new HashMap<>();
        for (int i = 0; i < s.length(); i++) {
            mp.put(s.charAt(i), mp.getOrDefault(s.charAt(i), 0) + 1);
        }
        for (int i = 0; i < t.length(); i++) {
            if (mp.containsKey(t.charAt(i)) && mp.get(t.charAt(i)) > 0) {
                mp.put(t.charAt(i), mp.get(t.charAt(i)) - 1);
            } else {
                return false;
            }
        }
        return true;
    }
}

🔍 Step-by-Step Explanation

1️⃣ Length Check
if (s.length() != t.length()) return false;
If lengths differ → cannot be anagrams.

2️⃣ Build Frequency Map
mp.put(s.charAt(i), mp.getOrDefault(s.charAt(i), 0) + 1);
Count occurrences of each character in s.

3️⃣ Subtract Using Second String
if (mp.containsKey(t.charAt(i)) && mp.get(t.charAt(i)) > 0)
If the character exists and frequency is available → reduce it. Otherwise → return false immediately.

🎯 Why This Works

We treat the first string as the frequency builder and the second string as the frequency consumer. If all frequencies match
perfectly, we return true.

⏱ Complexity Analysis

Time Complexity: O(n) — one pass to build the map, one pass to compare.
Space Complexity: O(26) ≈ O(1) — only lowercase English letters are allowed.

🔥 Optimized Approach – Using Array Instead of HashMap

Since the problem guarantees that s and t consist of lowercase English letters, we can replace the HashMap with an integer array of size 26. This is faster and cleaner.

Optimized Version

class Solution {
    public boolean isAnagram(String s, String t) {
        if (s.length() != t.length()) return false;
        int[] freq = new int[26];
        for (char c : s.toCharArray()) {
            freq[c - 'a']++;
        }
        for (char c : t.toCharArray()) {
            freq[c - 'a']--;
            if (freq[c - 'a'] < 0) {
                return false;
            }
        }
        return true;
    }
}

🚀 Why This Is Better

No HashMap overhead
Direct index access
Cleaner code
Faster in interviews

🏁 Final Thoughts

This problem teaches:
The frequency counting pattern
Early exit optimization
Using arrays instead of a HashMap when the character set is limited
Thinking in terms of resource balance

Valid Anagram is a foundation problem. Once you master it, you can easily solve:
Ransom Note
Find the Difference
Group Anagrams
Check Equal Character Occurrences
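As a taste of how the pattern transfers to one of the problems listed above, here is a hedged sketch of Ransom Note (LeetCode 383) using the same int[26] technique. The class name is our own, and we assume lowercase English letters as in problem 242:

```java
class RansomNoteSketch {
    // Same frequency pattern: build counts from the magazine,
    // consume them with the note. Lengths may differ here — the
    // magazine only needs to cover the note, not match it exactly.
    static boolean canConstruct(String ransomNote, String magazine) {
        int[] freq = new int[26];
        for (char c : magazine.toCharArray()) freq[c - 'a']++;
        for (char c : ransomNote.toCharArray()) {
            if (--freq[c - 'a'] < 0) return false; // magazine ran out of this letter
        }
        return true;
    }
}
```

The builder/consumer roles are identical to Valid Anagram; only the length check is dropped, because here the counts need to cover the note rather than balance it exactly.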

HashMapStringFrequency CountArraysLeetCodeEasy
Queue Data Structure Complete Guide - Java Explained With All Operations

Queue Data Structure Complete Guide - Java Explained With All Operations

IntroductionIf you have been learning Data Structures and Algorithms, you have probably already spent time with arrays, linked lists, and stacks. Now it is time to meet one of the most important and widely used data structures in computer science — the Queue.Queue is not just a theoretical concept. It powers some of the most critical systems you use every day — from how your printer handles jobs, to how your CPU schedules tasks, to how Google Maps finds the shortest path between two locations. Understanding Queue deeply means understanding how real systems work.In this complete guide we will cover absolutely everything — what a Queue is, how it differs from a Stack, every type of Queue, all operations with code, Java implementations, time and space complexity, common interview questions, and the most important LeetCode problems that use Queue.What Is a Queue?A Queue is a linear data structure that follows the FIFO principle — First In First Out. This means the element that was added first is the one that gets removed first.Think of it exactly like a real-world queue (a line of people). The person who joined the line first gets served first. No cutting in line, no serving from the back — strict order from front to back.This is the fundamental difference between a Queue and a Stack:Stack → LIFO (Last In First Out) — like a stack of plates, you take from the topQueue → FIFO (First In First Out) — like a line of people, you serve from the frontReal Life Examples of QueueBefore writing a single line of code, let us understand where queues appear in real life. This will make every technical concept feel natural.Printer Queue — when you send multiple documents to print, they print in the order they were sent. The first document sent prints first.CPU Task Scheduling — your operating system manages running processes in a queue. 
Tasks get CPU time in the order they arrive (in basic scheduling).Customer Service Call Center — when you call a helpline and are put on hold, you are placed in a queue. The first caller on hold gets connected first.WhatsApp Messages — messages are delivered in the order they are sent. The first message sent is the first one received.BFS (Breadth First Search) — every time you use Google Maps or any navigation app to find the shortest path, it uses BFS internally which is entirely powered by a Queue.Ticket Booking Systems — online booking portals process requests in the order they arrive. First come first served.Queue Terminology — Key Terms You Must KnowBefore diving into code, let us get the vocabulary right:Front — the end from which elements are removed (dequeued). This is where the "first person in line" stands.Rear (or Back) — the end at which elements are added (enqueued). New arrivals join here.Enqueue — the operation of adding an element to the rear of the queue. Like joining the back of a line.Dequeue — the operation of removing an element from the front of the queue. Like the first person in line being served and leaving.Peek (or Front) — looking at the front element without removing it. Like seeing who is first in line without serving them yet.isEmpty — checking whether the queue has no elements.isFull — relevant for fixed-size queues, checking whether no more elements can be added.Types of QueuesThis is where most beginners get confused. There is not just one type of Queue — there are several variations each designed to solve specific problems.1. Simple Queue (Linear Queue)The most basic form. Elements enter from the rear and leave from the front. Strict FIFO, nothing fancy.Enqueue → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue rear frontProblem with Simple Queue: In array-based implementation, once elements are dequeued from the front, those slots cannot be reused even if there is space. This wastes memory. This is why Circular Queue was invented.2. 
Circular QueueIn a Circular Queue, the rear wraps around to the front when it reaches the end of the array. The last position connects back to the first, forming a circle. This solves the wasted space problem of simple queues. [1] [2] [3] / \ [6] [4] \ / [5] ← rearUsed in: CPU scheduling, memory management, traffic light systems, streaming buffers.3. Double Ended Queue (Deque)A Deque (pronounced "deck") allows insertion and deletion from both ends — front and rear. It is the most flexible queue type.Enqueue Front → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue FrontEnqueue Rear → [ 1 | 2 | 3 | 4 | 5 ] → Dequeue RearTwo subtypes:Input Restricted Deque — insertion only at rear, deletion from both endsOutput Restricted Deque — deletion only at front, insertion at both endsUsed in: browser history (back and forward), undo-redo operations, sliding window problems.4. Priority QueueElements are not served in FIFO order — instead each element has a priority and the element with the highest priority is served first regardless of when it was added.Think of an emergency room. A patient with a critical injury jumps ahead of someone with a minor cut even if they arrived later.Two types:Max Priority Queue — highest value = highest priorityMin Priority Queue — lowest value = highest priorityUsed in: Dijkstra's shortest path, Huffman encoding, A* search algorithm, task scheduling with priorities.5. Blocking QueueA thread-safe queue used in multi-threading. If the queue is empty, a thread trying to dequeue will wait (block) until an element is available. 
If the queue is full, a thread trying to enqueue will wait until space is available.
Used in: Producer-Consumer problems, thread pool implementations, Java's java.util.concurrent package.

Queue Operations and Time Complexity

Every queue operation has a specific time complexity that you must know cold for interviews.

Operation | Description | Time Complexity
Enqueue | Add element to rear | O(1)
Dequeue | Remove element from front | O(1)
Peek/Front | View front element | O(1)
isEmpty | Check if queue is empty | O(1)
Size | Number of elements | O(1)
Search | Find a specific element | O(n)

Space Complexity: O(n) — where n is the number of elements stored.

All core queue operations are O(1). This is what makes Queue so powerful — no matter how many elements are in the queue, adding and removing always takes constant time.

Implementing Queue in Java — All Ways

Java gives you multiple ways to use a Queue. Let us go through each one.

Way 1: Using LinkedList (Most Common)

LinkedList implements the Queue interface in Java. This is the most commonly used Queue implementation.

import java.util.LinkedList;
import java.util.Queue;

Queue<Integer> queue = new LinkedList<>();

// Enqueue — add to rear
queue.offer(10);
queue.offer(20);
queue.offer(30);

// Peek — view front without removing
System.out.println(queue.peek()); // 10

// Dequeue — remove from front
System.out.println(queue.poll()); // 10
System.out.println(queue.poll()); // 20

// Check empty
System.out.println(queue.isEmpty()); // false

// Size
System.out.println(queue.size()); // 1

offer() vs add() — both add to the queue. add() throws an exception if the queue is full (for bounded queues). offer() returns false instead. Always prefer offer().
poll() vs remove() — both remove from front. remove() throws an exception if the queue is empty. poll() returns null. Always prefer poll().
peek() vs element() — both view the front. element() throws an exception if empty. peek() returns null.
Always prefer peek().Way 2: Using ArrayDeque (Fastest)ArrayDeque is faster than LinkedList for Queue operations because it uses a resizable array internally with no node allocation overhead.import java.util.ArrayDeque;import java.util.Queue;Queue<Integer> queue = new ArrayDeque<>();queue.offer(1);queue.offer(2);queue.offer(3);System.out.println(queue.peek()); // 1System.out.println(queue.poll()); // 1System.out.println(queue.size()); // 2When to use ArrayDeque over LinkedList? Use ArrayDeque whenever possible for Queue or Stack operations. It is faster because it avoids the overhead of node objects that LinkedList creates for every element. In competitive programming and interviews, ArrayDeque is the preferred choice.Way 3: Using Deque (Double Ended Queue)import java.util.ArrayDeque;import java.util.Deque;Deque<Integer> deque = new ArrayDeque<>();// Add to frontdeque.offerFirst(10);// Add to reardeque.offerLast(20);deque.offerLast(30);// Remove from frontSystem.out.println(deque.pollFirst()); // 10// Remove from rearSystem.out.println(deque.pollLast()); // 30// Peek front and rearSystem.out.println(deque.peekFirst()); // 20System.out.println(deque.peekLast()); // 20Way 4: Using PriorityQueueimport java.util.PriorityQueue;// Min Heap — smallest element has highest priorityPriorityQueue<Integer> minPQ = new PriorityQueue<>();minPQ.offer(30);minPQ.offer(10);minPQ.offer(20);System.out.println(minPQ.poll()); // 10 — smallest comes out first// Max Heap — largest element has highest priorityPriorityQueue<Integer> maxPQ = new PriorityQueue<>((a, b) -> b - a);maxPQ.offer(30);maxPQ.offer(10);maxPQ.offer(20);System.out.println(maxPQ.poll()); // 30 — largest comes out firstWay 5: Implementing Queue From Scratch Using ArrayUnderstanding the underlying implementation helps you in interviews when asked to build one from scratch.class MyQueue { private int[] arr; private int front; private int rear; private int size; private int capacity; public MyQueue(int capacity) { 
this.capacity = capacity; arr = new int[capacity]; front = 0; rear = -1; size = 0; } public void enqueue(int val) { if (size == capacity) { System.out.println("Queue is full!"); return; } rear = (rear + 1) % capacity; // circular wrapping arr[rear] = val; size++; } public int dequeue() { if (isEmpty()) { System.out.println("Queue is empty!"); return -1; } int val = arr[front]; front = (front + 1) % capacity; // circular wrapping size--; return val; } public int peek() { if (isEmpty()) return -1; return arr[front]; } public boolean isEmpty() { return size == 0; } public int size() { return size; }}Notice the % capacity in enqueue and dequeue — that is what makes it a Circular Queue. Without this, once the rear reaches the end of the array, you cannot add more even if front has moved forward and freed up space.Way 6: Implementing Queue Using Two StacksThis is a very popular interview question — implement a Queue using two stacks. The idea is to use one stack for enqueue and another for dequeue.class QueueUsingTwoStacks { Stack<Integer> s1 = new Stack<>(); // for enqueue Stack<Integer> s2 = new Stack<>(); // for dequeue public void enqueue(int val) { s1.push(val); // always push to s1 } public int dequeue() { if (s2.isEmpty()) { // transfer all elements from s1 to s2 // this reverses the order, giving FIFO behavior while (!s1.isEmpty()) { s2.push(s1.pop()); } } return s2.pop(); } public int peek() { if (s2.isEmpty()) { while (!s1.isEmpty()) { s2.push(s1.pop()); } } return s2.peek(); } public boolean isEmpty() { return s1.isEmpty() && s2.isEmpty(); }}Why does this work?When you transfer elements from s1 to s2, the order reverses. The element that was added first to s1 ends up on top of s2 — which means it gets dequeued first. FIFO achieved using two LIFOs!Amortized time complexity: Each element is pushed and popped at most twice (once in s1, once in s2). 
So dequeue is O(1) amortized even though individual calls might take O(n). This is LeetCode 232 — Implement Queue using Stacks.

Queue vs Stack — Side by Side

Feature | Queue | Stack
Principle | FIFO — First In First Out | LIFO — Last In First Out
Insert at | Rear | Top
Remove from | Front | Top
Real life | Line of people | Stack of plates
Java class | LinkedList, ArrayDeque | Stack, ArrayDeque
Main use | BFS, scheduling | DFS, backtracking, parsing
Peek | Front element | Top element

BFS — The Most Important Application of Queue

Breadth First Search (BFS) is the single most important algorithm that uses a Queue. Understanding BFS is why Queue matters so much in DSA.

BFS explores a graph or tree level by level — all nodes at distance 1 first, then all at distance 2, and so on. A Queue naturally enforces this level-by-level behavior.

public void bfs(int start, List<List<Integer>> graph) {
    Queue<Integer> queue = new LinkedList<>();
    boolean[] visited = new boolean[graph.size()];
    queue.offer(start);
    visited[start] = true;
    while (!queue.isEmpty()) {
        int node = queue.poll(); // process front node
        System.out.print(node + " ");
        for (int neighbor : graph.get(node)) {
            if (!visited[neighbor]) {
                visited[neighbor] = true;
                queue.offer(neighbor); // add unvisited neighbors to rear
            }
        }
    }
}

Why Queue and not Stack for BFS? Queue ensures you process all neighbors of a node before going deeper. Stack would take you deep into one path first — that is DFS, not BFS.
The FIFO property is what guarantees level-by-level exploration.

BFS with Queue is used in:
- Shortest path in unweighted graphs
- Level order traversal of trees
- Finding connected components
- Word ladder problems
- Rotten oranges, flood fill, and matrix BFS problems

Level Order Traversal — BFS on Trees

One of the most common Queue problems in interviews is Level Order Traversal of a binary tree.

public List<List<Integer>> levelOrder(TreeNode root) {
    List<List<Integer>> result = new ArrayList<>();
    if (root == null) return result;
    Queue<TreeNode> queue = new LinkedList<>();
    queue.offer(root);
    while (!queue.isEmpty()) {
        int levelSize = queue.size(); // number of nodes at current level
        List<Integer> level = new ArrayList<>();
        for (int i = 0; i < levelSize; i++) {
            TreeNode node = queue.poll();
            level.add(node.val);
            if (node.left != null) queue.offer(node.left);
            if (node.right != null) queue.offer(node.right);
        }
        result.add(level);
    }
    return result;
}

The key trick here is using queue.size() at the start of each while loop iteration to know exactly how many nodes belong to the current level. Process exactly that many nodes, then move to the next level. This is LeetCode 102 — Binary Tree Level Order Traversal.

Sliding Window Maximum — Monotonic Deque

One of the most impressive Queue applications is the Sliding Window Maximum problem using a Monotonic Deque. This is the queue equivalent of the Monotonic Stack pattern you saw in stack problems. The idea — maintain a deque that stores indices of elements in decreasing order.
The front always holds the index of the maximum element in the current window.

public int[] maxSlidingWindow(int[] nums, int k) {
    Deque<Integer> deque = new ArrayDeque<>(); // stores indices
    int[] result = new int[nums.length - k + 1];
    int idx = 0;
    for (int i = 0; i < nums.length; i++) {
        // remove indices that are out of the current window
        while (!deque.isEmpty() && deque.peekFirst() < i - k + 1) {
            deque.pollFirst();
        }
        // remove indices whose values are smaller than current
        // they can never be the maximum for any future window
        while (!deque.isEmpty() && nums[deque.peekLast()] < nums[i]) {
            deque.pollLast();
        }
        deque.offerLast(i);
        // window is fully formed, record maximum (front of deque)
        if (i >= k - 1) {
            result[idx++] = nums[deque.peekFirst()];
        }
    }
    return result;
}

This gives O(n) time for what would otherwise be an O(n×k) problem. This is LeetCode 239 — Sliding Window Maximum.

Java Queue Interface — Complete Method Reference

Here is every method you will ever need from Java's Queue and Deque interfaces.

Queue Methods:
- offer(e) — add to rear, returns false if full (preferred over add)
- poll() — remove from front, returns null if empty (preferred over remove)
- peek() — view front without removing, returns null if empty (preferred over element)
- isEmpty() — returns true if no elements
- size() — returns number of elements
- contains(o) — returns true if element exists

Deque Additional Methods:
- offerFirst(e) — add to front
- offerLast(e) — add to rear
- pollFirst() — remove from front
- pollLast() — remove from rear
- peekFirst() — view front
- peekLast() — view rear

PriorityQueue Specific:
- offer(e) — add with natural ordering or custom comparator
- poll() — remove element with highest priority
- peek() — view highest priority element without removing

Common Interview Questions About Queue

These are the questions interviewers ask to test your understanding of queues conceptually — not just coding.

Q1. What is the difference between Queue and Stack?
Queue is FIFO — elements are removed in the order they were added. Stack is LIFO — the most recently added element is removed first. Queue removes from the front, Stack removes from the top.

Q2. Why is ArrayDeque preferred over LinkedList for Queue in Java?
ArrayDeque uses a resizable array internally and has better cache locality and no node allocation overhead. LinkedList creates a new node object for every element added, which means more garbage collection pressure. ArrayDeque is faster in practice for most Queue use cases.

Q3. When would you use a PriorityQueue instead of a regular Queue?
When the order of processing depends on priority rather than arrival order. For example, in a hospital, critical patients are treated before minor cases regardless of when they arrived. Or in Dijkstra's algorithm, where you always process the shortest known distance first.

Q4. How is Queue used in BFS?
BFS uses a Queue to explore nodes level by level. The starting node is enqueued first. Each time a node is dequeued, all its unvisited neighbors are enqueued. Since Queue is FIFO, all neighbors of a node are processed before going deeper — guaranteeing level-by-level exploration.

Q5. What is the difference between poll() and remove() in Java Queue?
Both remove the front element. remove() throws NoSuchElementException if the queue is empty; poll() returns null instead of throwing. Always use poll() for safer code.

Q6. Can a Queue have duplicates?
Yes. Unlike Sets, a Queue places no restriction on duplicate values. The same value can appear multiple times in a Queue.

Q7. What is a Blocking Queue and when is it used?
A Blocking Queue is a thread-safe Queue used in multi-threaded applications. When a thread tries to dequeue from an empty queue, it blocks (waits) until an element is available. When a thread tries to enqueue into a full queue, it blocks until space is available.
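This blocking behavior can be sketched with the JDK's built-in ArrayBlockingQueue, whose put() blocks when the queue is full and whose take() blocks when it is empty (the class name ProducerConsumerDemo and the capacity of 2 are illustrative choices, not from the article):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal producer-consumer sketch: one producer thread, main thread as consumer.
public class ProducerConsumerDemo {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(2); // capacity 2

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks here whenever the queue is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        // main thread consumes; take() blocks whenever the queue is empty
        for (int i = 0; i < 5; i++) {
            System.out.println("consumed " + queue.take());
        }
        producer.join();
    }
}
```

Because the capacity is only 2, the producer cannot race ahead of the consumer by more than two elements; the blocking calls themselves provide the coordination, with no explicit locks or wait/notify code.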
Used in Producer-Consumer patterns.

Top LeetCode Problems on Queue

Here are the most important LeetCode problems that use Queue — organized from beginner to advanced.

Beginner Level:
- 232. Implement Queue using Stacks — implement Queue with two stacks, classic interview question
- 225. Implement Stack using Queues — reverse of 232, implement Stack using Queue
- 933. Number of Recent Calls — sliding window with Queue

Intermediate Level:
- 102. Binary Tree Level Order Traversal — BFS on tree, must know
- 107. Binary Tree Level Order Traversal II — same but bottom up
- 994. Rotting Oranges — multi-source BFS on grid
- 1091. Shortest Path in Binary Matrix — BFS shortest path
- 542. 01 Matrix — multi-source BFS, distance to nearest 0
- 127. Word Ladder — BFS on word graph, classic

Advanced Level:
- 239. Sliding Window Maximum — monotonic deque, must know
- 862. Shortest Subarray with Sum at Least K — monotonic deque with prefix sums
- 407. Trapping Rain Water II — 3D BFS with priority queue
- 787. Cheapest Flights Within K Stops — BFS with constraints

Queue Cheat Sheet — Everything at a Glance

Create a Queue:
Queue<Integer> q = new LinkedList<>(); // standard
Queue<Integer> q = new ArrayDeque<>(); // faster, preferred
Deque<Integer> dq = new ArrayDeque<>(); // double ended
PriorityQueue<Integer> pq = new PriorityQueue<>(); // min heap
PriorityQueue<Integer> pq = new PriorityQueue<>((a, b) -> b - a); // max heap

Core Operations:
q.offer(x); // enqueue
q.poll(); // dequeue
q.peek(); // front element
q.isEmpty(); // check empty
q.size(); // number of elements

Deque Operations:
dq.offerFirst(x); // add to front
dq.offerLast(x); // add to rear
dq.pollFirst(); // remove from front
dq.pollLast(); // remove from rear
dq.peekFirst(); // view front
dq.peekLast(); // view rear

BFS Template:
Queue<Integer> queue = new LinkedList<>();
queue.offer(start);
visited[start] = true;
while (!queue.isEmpty()) {
    int node = queue.poll();
    for (int neighbor : graph.get(node)) {
        if (!visited[neighbor]) {
            visited[neighbor] = true;
            queue.offer(neighbor);
        }
    }
}

Conclusion

Queue is one of those data structures that appears simple on the surface but has incredible depth once you start exploring its variations and applications. From the basic FIFO concept to Circular Queues, Deques, Priority Queues, Monotonic Deques, and BFS — each layer adds a new tool to your problem-solving arsenal.

Here is the learning path to follow based on everything covered in this guide: start with understanding FIFO vs LIFO and when each applies. Then get comfortable with Java's Queue interface — offer, poll, peek. Practice the BFS template until it feels automatic, then move to Level Order Traversal problems. Once BFS clicks, tackle multi-source BFS problems like Rotting Oranges. Finally, learn the Monotonic Deque pattern for sliding window problems.

Master these and you will handle every Queue problem in any coding interview with confidence.

QueueData StructureJavaBFSDequePriority QueueCircular Queue
LeetCode 784 Letter Case Permutation | Recursion & Backtracking Java Solution

Introduction

The Letter Case Permutation problem is a classic example of recursion and backtracking, often asked in coding interviews and frequently searched by learners preparing for platforms like LeetCode. This problem helps in understanding:
- Decision-making at each step
- Recursive branching
- String manipulation

In this article, we'll break down the intuition, visualize the decision process using a decision tree, and implement an efficient Java solution.

🔗 Problem Link
LeetCode: Letter Case Permutation

Problem Statement

Given a string s, you can transform each alphabet character into:
- Lowercase
- Uppercase
Digits remain unchanged.
👉 Return all possible strings formed by these transformations.

Examples

Example 1
Input: s = "a1b2"
Output: ["a1b2","a1B2","A1b2","A1B2"]

Example 2
Input: s = "3z4"
Output: ["3z4","3Z4"]

Key Insight

At each character:
- If it's a digit → only one choice
- If it's a letter → two choices: lowercase OR uppercase
So the total number of combinations is 2^(number of letters).

Intuition (Using the Decision Tree)

For input "a1b2", start from index 0:

                 ""
               /    \
             "a"      "A"
              |        |
            "a1"     "A1"
           /    \    /    \
       "a1b" "a1B" "A1b" "A1B"
         |      |     |      |
     "a1b2" "a1B2" "A1b2" "A1B2"

Understanding the Tree
- At 'a' → branch into 'a' and 'A'
- '1' → no branching (digit)
- 'b' → again branching
- '2' → no branching
📌 Leaf nodes = final answers

Approach: Recursion + Backtracking

Idea
- Traverse the string character by character
- If digit → move forward
- If letter → branch into lowercase and uppercase

Java Code

import java.util.*;

class Solution {
    // List to store all results
    List<String> lis = new ArrayList<>();

    public void solve(String s, int ind, String ans) {
        // Base case: reached end of string
        if (ind == s.length()) {
            lis.add(ans); // store generated string
            return;
        }
        char ch = s.charAt(ind);
        // If character is a digit → only one option
        if (ch >= '0' && ch <= '9') {
            solve(s, ind + 1, ans + ch);
        } else {
            // Choice 1: convert to lowercase
            solve(s, ind + 1, ans + Character.toLowerCase(ch));
            // Choice 2: convert to uppercase
            solve(s, ind + 1, ans + Character.toUpperCase(ch));
        }
    }

    public List<String> letterCasePermutation(String s) {
        solve(s, 0, ""); // start recursion
        return lis;
    }
}

Step-by-Step Execution

For "a1b2":
- Start → ""
- 'a' → "a", "A"
- '1' → "a1", "A1"
- 'b' → "a1b", "a1B", "A1b", "A1B"
- '2' → final strings

Complexity Analysis
- Time Complexity: O(n · 2^n), where n is the number of letters — 2^n strings are generated, and building each string costs O(n)
- Space Complexity: O(n · 2^n) for storing the results

Why This Approach Works
- Recursion explores all possibilities
- Each letter creates a branching point
- Digits pass through unchanged
- Backtracking ensures all combinations are generated

Key Takeaways
- This is a binary decision recursion problem
- Letters → 2 choices
- Digits → 1 choice
- The decision tree maps directly to the recursion
- The pattern is similar to Subsets and Permutations with conditions

When This Problem Is Asked
Common in:
- Coding interviews
- Recursion/backtracking rounds
- String manipulation problems

Conclusion

The Letter Case Permutation problem is a perfect example of how recursion can be used to explore all possible combinations efficiently. Once the decision tree is clear, the implementation becomes straightforward. This pattern is widely used in many advanced problems, making it essential to master.

Frequently Asked Questions (FAQs)

1. Why don't digits create branches?
Because they have only one valid form.

2. What is the main concept used?
Recursion with decision-making (backtracking).

3. Can this be solved iteratively?
Yes, using BFS or iterative expansion, but recursion is more intuitive.
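As noted in FAQ 3, the same result can be produced without recursion by expanding a list of partial strings one character at a time. A minimal sketch of that iterative approach (the class name IterativeLetterCase is illustrative, not from the article):

```java
import java.util.ArrayList;
import java.util.List;

// Iterative expansion: after processing each character, `results` holds
// every possible casing of the prefix seen so far.
public class IterativeLetterCase {
    public static List<String> letterCasePermutation(String s) {
        List<String> results = new ArrayList<>();
        results.add(""); // one empty prefix to start from
        for (char ch : s.toCharArray()) {
            List<String> next = new ArrayList<>();
            for (String prefix : results) {
                if (Character.isDigit(ch)) {
                    next.add(prefix + ch); // digits: exactly one choice
                } else {
                    next.add(prefix + Character.toLowerCase(ch)); // choice 1
                    next.add(prefix + Character.toUpperCase(ch)); // choice 2
                }
            }
            results = next; // move to the next "level" of the decision tree
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(letterCasePermutation("a1b2"));
        // [a1b2, a1B2, A1b2, A1B2]
    }
}
```

Each pass of the outer loop corresponds to one level of the decision tree shown earlier, which is exactly the BFS-style expansion the FAQ mentions.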

LeetCodeMediumJavaRecursion
Building an AI Art Detective: From Kaggle Data to Deployed Vision Transformer (ViT)

Introduction

The rise of generative AI has created a new frontier for verification. As developers, we are no longer just building features; we are building filters for reality. This project explores how to fine-tune Google's Vision Transformer (ViT) to detect the subtle "fingerprints" of AI-generated art. By the end of this guide, you will understand how to orchestrate a full ML lifecycle: data ingestion, model fine-tuning, threshold calibration, and cloud deployment.

1. Data Engineering: The "Super Dataset"

A model is only as good as its training data. For this project, I used the AI Generated vs Real Images dataset (2.5GB). To ensure a reproducible pipeline, I automated the download and extraction directly within the environment. This is a critical step for "headless" training in cloud environments like Google Colab or Kaggle Kernels.

import os
import zipfile

# Automating Data Ingestion via Kaggle API
dataset_name = "cashbowman/ai-generated-images-vs-real-images"
zip_path = "ai-generated-images-vs-real-images.zip"
target_dir = 'super_dataset'

print("Downloading 2.5GB high-quality dataset...")
!kaggle datasets download -d {dataset_name}

if os.path.exists(zip_path):
    with zipfile.ZipFile(zip_path, 'r') as z:
        z.extractall(target_dir)
    os.remove(zip_path)  # Storage optimization: remove zip after extraction
    print(f"Success! Data structure ready in /{target_dir}")

2. Architecture Deep Dive: Why ViT?

Standard Convolutional Neural Networks (CNNs) process images through local filters, which are great for textures but often miss "global" errors (like lighting inconsistency or anatomically impossible structures). I chose the google/vit-base-patch16-224 model because it treats an image like a sequence of tokens, similar to how BERT treats words:
- Patching: The 224x224 image is sliced into 196 patches (each 16x16 pixels).
- Linear Projection: Each patch is flattened into a 768-dimensional vector.
- Self-Attention: 12 attention heads allow the model to compare every patch against every other patch.
This "global view" helps the model realize that while a texture looks "real," the overall structure is "AI-generated."

3. The Training Loop & The "Safety Threshold"

Training involved Transfer Learning. We froze the base "knowledge" of the model and only trained the final classification head to recognize the specific artifacts of generative AI.

The Critical Logic: Confidence Thresholding

In a production setting, a "False Positive" (calling a real artist's work AI) is a disaster for user trust. I implemented a 0.75 Confidence Threshold:
- AI Generated: Only if Probability > 0.75
- Real Art: The default if the model is uncertain.

# The inference logic in app.py
def predict(image):
    inputs = processor(images=image, return_tensors="pt")
    outputs = model(**inputs)
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
    ai_score = probs[0][0].item()
    real_score = probs[0][1].item()
    # Custom safety gate
    label = "AI Generated" if ai_score > 0.75 else "Real Art"
    return label, {"AI": ai_score, "Real": real_score}

4. Deployment MLOps: Navigating "Dependency Hell"

Deploying on Hugging Face Spaces sounds easy, but it often involves complex version conflicts. Here is the "Stability Recipe" used to overcome common runtime errors (like the audioop removal in Python 3.13).

The Requirements Recipe

To ensure the Space remains "Running," we pinned specific versions in requirements.txt:

torch --index-url https://download.pytorch.org/whl/cpu
transformers==4.44.2
huggingface_hub==0.24.7
gradio==4.44.1
pydantic==2.10.6

Git LFS (Large File Storage)

Since the model weights are ~350MB, standard Git won't track them. We used Git LFS to ensure the binary files were uploaded correctly to the Hugging Face Hub.

5. The Full-Stack Integration

One of the most powerful features of this deployment is the automatic API.
Any modern application can now consume this model as a microservice.

Example: Integrating with a React Frontend

import { Client } from "@gradio/client";

async function checkArt(imageBlob) {
    const app = await Client.connect("hugua/vit");
    const result = await app.predict("/predict", [imageBlob]);
    console.log("Verdict:", result.data[0]);
}

Here are two demonstrations. For each, try to judge whether the image is AI-generated or real, then compare against the model's prediction. The first image can be cross-checked against the YouTube video it was taken from; a second example follows the same pattern.

Conclusion & Next Steps

This project bridges the gap between raw data science and full-stack engineering. We moved from a 2.5GB raw ZIP file to a live, globally accessible API. The next evolution of this project would be to implement Explainability using Attention Maps, allowing users to see exactly which parts of the image (e.g., the eyes or the background) triggered the "AI" flag.

Resources:
- Dataset: AI vs Real Images (Kaggle)
- Live Demo: Live Link
- Documentation: Hugging Face Transformers Guide
- Google Colab: Link

MachineLearningComputerVisionNextJSPythonAIVisionTransformer