
Showing results for "Binary Strings"

Found 11 results

LeetCode 1980: Find Unique Binary String – Multiple Ways to Generate a Missing Binary Combination

Try the Problem

You can solve the problem here: https://leetcode.com/problems/find-unique-binary-string/

Problem Description

You are given an array nums containing n unique binary strings, where each string has length n. Your task is to return any binary string of length n that does not appear in the array.

Important Conditions

- Each string consists only of '0' and '1'.
- Every string in the array is unique.
- The output must be a binary string of length n.
- If multiple valid answers exist, any one of them is acceptable.

Examples

Example 1
Input: nums = ["01","10"]
Output: "11"
Explanation: The possible binary strings of length 2 are 00, 01, 10, 11. Since "01" and "10" are already present, valid answers include "00" and "11".

Example 2
Input: nums = ["00","01"]
Output: "11"
Another valid output could be "10".

Example 3
Input: nums = ["111","011","001"]
Output: "101"
Other valid answers include "000", "010", "100", "110".

Constraints

- n == nums.length
- 1 <= n <= 16
- nums[i].length == n
- nums[i] consists only of '0' and '1'
- All strings in nums are unique

Important Observation

The total number of binary strings of length n is 2^n, but the array contains only n strings. Since 2^n grows very quickly and n ≤ 16, there are many binary strings missing from the array. Our goal is simply to construct one of them.

Thinking About the Problem

Before jumping into coding, it's useful to consider different strategies for generating a binary string that does not appear in the array:

- Generate all possible binary strings of length n and check which one is missing.
- Store all strings in a HashSet or HashMap and construct a candidate string to verify whether it exists.
- Manipulate existing strings by flipping bits to create new combinations.
- Use a mathematical trick that guarantees the new string differs from every string in the list.

Each of these ideas leads to a different solution strategy. In this article, we walk through these approaches and see how they work.

Approach 1: Brute Force – Generate All Binary Strings

Idea: The simplest idea is to generate every possible binary string of length n (there are 2^n of them) and return the first one that does not appear in nums.

Steps:
1. Convert numbers from 0 to 2^n - 1 into binary strings.
2. Pad each binary string with leading zeros so its length becomes n.
3. Check whether that string exists in the array.
4. If not, return it.

Time Complexity: O(2^n * n). This works because n is at most 16, but it is not the most elegant approach.

Approach 2: HashMap + Bit Flipping (My Approach)

Idea: Store all given binary strings in a HashMap for quick lookup, then construct a new binary string by flipping bits from the existing strings. The intuition is simple: if the current character is '0', change it to '1'; if it is '1', change it to '0'. By flipping bits at different positions, we attempt to build a new binary combination. Once the string is constructed, we check whether it already exists in the map; if the generated string does not exist, we return it as our answer.

Java Implementation (My Solution)

```java
class Solution {
    public String findDifferentBinaryString(String[] nums) {
        int len = nums[0].length();
        // HashMap to store all given binary strings
        HashMap<String, Integer> mp = new HashMap<>();
        for (int i = 0; i < nums.length; i++) {
            mp.put(nums[i], i);
        }
        int cou = 0;
        String ans = "";
        for (int i = 0; i < nums.length; i++) {
            if (cou < len) {
                // Flip the current bit
                if (nums[i].charAt(cou) == '0') {
                    ans += '1';
                    cou++;
                } else {
                    ans += '0';
                    cou++;
                }
            } else {
                // If generated string does not exist in map
                if (!mp.containsKey(ans)) {
                    return ans;
                }
                // Reset and try building again
                ans = "";
                cou = 0;
            }
        }
        return ans;
    }
}
```

Time Complexity: O(n²), because we iterate through the array and perform string operations.
Space Complexity: O(n), for storing the strings in the HashMap.

Approach 3: Cantor's Diagonalization (Optimal Solution)

Idea: A clever mathematical observation lets us construct a string that must differ from every string in the array. We build a new string such that:

- The first character differs from the first string.
- The second character differs from the second string.
- The third character differs from the third string.
- And so on.

Because the generated string differs from each string in at least one position, it is guaranteed not to exist in the array. This technique is known as Cantor's Diagonalization.

Java Implementation

```java
class Solution {
    public String findDifferentBinaryString(String[] nums) {
        int n = nums.length;
        StringBuilder result = new StringBuilder();
        for (int i = 0; i < n; i++) {
            // Flip the diagonal bit
            if (nums[i].charAt(i) == '0') {
                result.append('1');
            } else {
                result.append('0');
            }
        }
        return result.toString();
    }
}
```

Time Complexity: O(n), since we traverse the array only once.
Space Complexity: O(n), for the resulting string.

Comparison of Approaches

| Approach | Time Complexity | Space Complexity | Notes |
|---|---|---|---|
| Brute Force | O(2^n * n) | O(n) | Simple but inefficient |
| HashMap + Bit Flipping | O(n²) | O(n) | Constructive approach |
| Cantor Diagonalization | O(n) | O(n) | Optimal and elegant |

Key Takeaways

This problem highlights an interesting concept in algorithm design: sometimes the best solution is not searching for the answer but constructing one directly. By understanding the structure of the input, we can generate a result that is guaranteed to be unique.

Conclusion

The Find Unique Binary String problem can be solved with multiple strategies, from brute-force enumeration to clever mathematical construction. While brute force works thanks to the small constraint (n ≤ 16), hashing and constructive approaches are more efficient and demonstrate deeper algorithmic thinking. Among all the approaches, Cantor Diagonalization provides the most efficient and mathematically guaranteed solution. Understanding problems like this strengthens skills in string manipulation, hashing, and constructive algorithms, which are commonly tested in coding interviews.
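For completeness, here is one way Approach 1 (brute force) could be implemented. This is a sketch of the enumeration steps described above, not code from the original article; the class name and the use of a HashSet are my own choices:

```java
import java.util.*;

class BruteForceSketch {
    // Enumerate 0 .. 2^n - 1, zero-pad each to length n,
    // and return the first candidate not present in nums.
    static String findDifferentBinaryString(String[] nums) {
        Set<String> seen = new HashSet<>(Arrays.asList(nums));
        int n = nums.length;
        for (int i = 0; i < (1 << n); i++) {
            String candidate = Integer.toBinaryString(i);
            // pad with leading zeros to length n
            while (candidate.length() < n) {
                candidate = "0" + candidate;
            }
            if (!seen.contains(candidate)) {
                return candidate;
            }
        }
        return ""; // unreachable: 2^n > n guarantees a missing string
    }
}
```

For nums = ["01","10"], this returns "00", the first missing candidate in numeric order, which is one of the valid answers listed in Example 1.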

Binary Strings · Hashing · Cantor Diagonalization · LeetCode · Medium
Check if Binary String Has at Most One Segment of Ones – Java Solution (LeetCode 1784)

Try the Question

Before reading the solution, try solving the problem yourself:
👉 https://leetcode.com/problems/check-if-binary-string-has-at-most-one-segment-of-ones/

Attempting the problem first helps build your problem-solving intuition, which is essential for coding interviews.

Problem Description

You are given a binary string s, meaning the string contains only the two characters '0' and '1'. The string has no leading zeros, so the first character is always '1'. Your task is to determine whether the string contains at most one contiguous segment of 1s. If the string has only one continuous group of 1s, return true; if it contains multiple separated groups of 1s, return false.

Understanding the Problem Clearly

The key phrase in the problem is "at most one contiguous segment of ones". A segment is a continuous block of characters without interruption. For example, "111" is one segment of ones. But if 1s are separated by 0s and appear again later, there are multiple segments.

Example Walkthrough

Example 1
Input: s = "1001"
Structure: 1 0 0 1
Here we have segment 1 → "1" and segment 2 → "1". There are two separate segments of ones, which violates the condition.
Output: false

Example 2
Input: s = "110"
Structure: 1 1 0
There is only one continuous block of ones.
Output: true

Visual Intuition

The string is valid only if it follows the pattern [one block of 1s] + [any number of 0s], such as "111111000000". The string becomes invalid if we see something like "111001011", because the 1s stop, a 0 appears, and then a 1 starts again, which means two segments of ones exist.

Key Observation

Since the string starts with 1, the valid structure must look like "111...111000...000". Once we encounter the first 0, we should never see a 1 again. If we ever see a 0 followed by a 1, a new segment of ones has started, and the answer is false.

Intuition Behind the Solution

The logic becomes very simple:

- Traverse the string from left to right.
- Keep track of the previous character.
- If we ever see the pattern 0 followed by 1, a new segment of ones has started, so return false.
- If we finish scanning the string without seeing this pattern, the string is valid.

Java Implementation

```java
class Solution {
    public boolean checkOnesSegment(String s) {
        if (s.length() == 1) return true;
        if (s.length() == 2 && s.charAt(1) == '1') {
            return true;
        }
        if (s.length() == 2 && s.charAt(1) == '0') {
            return true;
        }
        char prev = '0';
        for (int i = 0; i < s.length() - 1; i++) {
            prev = s.charAt(i);
            if (s.charAt(i + 1) == '1' && prev == '0') {
                return false;
            }
        }
        return true;
    }
}
```

Step-by-Step Code Explanation

1. Handle small edge cases. If the string length is 1, it must be "1": there is obviously only one segment, so we return true. If the length is 2, both possible strings ("11" and "10") are valid, because neither creates multiple segments of ones.

2. Traverse the string. We loop through the string with `for (int i = 0; i < s.length() - 1; i++)` and at every step compare the current character with the next one.

3. Detect the invalid pattern. The condition `if (s.charAt(i + 1) == '1' && prev == '0')` means we found 0 → 1, which indicates a new segment of ones has started, so we return false.

4. If no violation is found. If we finish the loop without encountering the 0 → 1 pattern, the string contains only one contiguous segment of ones, so we return true.

Time Complexity: O(n), where n is the length of the string; we traverse the string only once.
Space Complexity: O(1); no extra space is used except a few variables.

A Simpler Observation (Bonus Insight)

A simpler trick for this problem is checking whether the string contains "01". Why does this work? Because the string starts with '1', any occurrence of "01" means a 1 reappears after a 0, which is exactly a second segment of ones. For example, "111000" is valid and contains no "01", but "1110011" contains "01", so a second segment of ones exists.

Key Takeaways

✔ Binary strings contain only 0 and 1
✔ A segment means a continuous block
✔ Valid strings contain only one block of ones
✔ The invalid pattern is 0 followed by 1
✔ The solution works with one linear scan

Final Thoughts

Although this problem is categorized as easy, it tests an important concept: pattern detection while traversing strings. Problems like this are common in interviews because they evaluate logical reasoning, edge-case handling, and string traversal techniques. Mastering such problems helps build a strong foundation for more complex string and pattern-matching algorithms.
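The bonus insight above collapses the whole solution into a single expression. A sketch (the class name is mine):

```java
class OneSegmentSketch {
    // With no leading zeros, a second segment of ones can only
    // begin after a zero, i.e. exactly when "01" appears in s.
    static boolean checkOnesSegment(String s) {
        return !s.contains("01");
    }
}
```

For "1001" this returns false (it contains "01" starting at index 2), and for "110" it returns true, matching the walkthrough examples.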

LeetCode · Java · String Problems · Binary Strings · Easy
Minimum Changes to Make Alternating Binary String – LeetCode 1758 Explained

Try This Question

Before reading the solution, try solving the problem yourself on LeetCode:
👉 https://leetcode.com/problems/minimum-changes-to-make-alternating-binary-string/

Problem Statement

You are given a binary string s consisting only of the characters '0' and '1'. In one operation, you can change any '0' to '1' or any '1' to '0'. A string is called alternating if no two adjacent characters are the same.

Examples of alternating strings: "010101", "0101", "1010"
Examples of non-alternating strings: "0001", "0011", "11101"

Your task is to return the minimum number of operations required to make the string alternating.

Example Walkthrough

Example 1
Input: s = "0100"
Possible fix: "0101". Only the last character needs to change.
Output: 1

Example 2
Input: s = "10"
The string is already alternating.
Output: 0

Example 3
Input: s = "1111"
Possible alternating strings: "0101" or "1010". Minimum operations needed = 2.
Output: 2

Key Observation

An alternating binary string can only follow one of two patterns:

Pattern 1: "0101010101..."
Pattern 2: "1010101010..."

So instead of trying many combinations, we only need to check:

1️⃣ How many changes are required to convert s → "010101..."
2️⃣ How many changes are required to convert s → "101010..."

Then we return the minimum of the two.

Approach

Step 1: Generate the two possible alternating strings, s1 = "010101..." and s2 = "101010...", both of the same length as the input string.

Step 2: Compare the original string with both patterns and count mismatches. For example, s = "0100" vs s1 = "0101" gives a mismatch count of 1.

Step 3: Repeat for the second pattern, then return min(mismatch1, mismatch2).

Intuition Behind the Solution

Instead of flipping characters randomly, we compare the string with the only two valid alternating possibilities. Why only two? Because an alternating string must start with either 0 or 1, and after that the pattern is fixed. So we simply compute which pattern requires fewer changes. This makes the solution efficient and simple.

Java Implementation

```java
class Solution {
    public int minOperations(String s) {
        int co1 = 0;
        int co2 = 0;
        String s1 = "";
        String s2 = "";
        for (int i = 0; i < s.length(); i++) {
            if (i % 2 == 0) {
                s1 += "0";
            } else {
                s1 += "1";
            }
        }
        for (int i = 0; i < s.length(); i++) {
            if (i % 2 == 0) {
                s2 += "1";
            } else {
                s2 += "0";
            }
        }
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) != s1.charAt(i)) {
                co1++;
            }
        }
        for (int i = 0; i < s.length(); i++) {
            if (s.charAt(i) != s2.charAt(i)) {
                co2++;
            }
        }
        return Math.min(co1, co2);
    }
}
```

Complexity Analysis

Time Complexity: O(n) comparisons, though note that building s1 and s2 with repeated string concatenation copies the string on each append, which adds hidden overhead; the optimized version removes this.
Space Complexity: O(n), because we create two extra strings of size n.

Optimized Approach (Better Interview Answer)

We can avoid creating extra strings and calculate mismatches directly.

Optimized Java Code

```java
class Solution {
    public int minOperations(String s) {
        int pattern1 = 0;
        int pattern2 = 0;
        for (int i = 0; i < s.length(); i++) {
            char expected1 = (i % 2 == 0) ? '0' : '1';
            char expected2 = (i % 2 == 0) ? '1' : '0';
            if (s.charAt(i) != expected1) pattern1++;
            if (s.charAt(i) != expected2) pattern2++;
        }
        return Math.min(pattern1, pattern2);
    }
}
```

Space Complexity Now: O(1), with no extra strings required.

Why This Problem Is Important

This problem teaches important interview concepts:

✔ Pattern observation
✔ Greedy thinking
✔ String manipulation
✔ Optimization techniques

Many companies ask similar pattern-based string problems.

Final Thoughts

The trick in this problem is realizing that only two alternating patterns exist. Once you identify that, the problem becomes straightforward: instead of trying multiple modifications, you simply compare and count mismatches. This leads to a clean and efficient O(n) solution. If you are preparing for coding interviews, practicing problems like this will improve your pattern-recognition skills, which are key for solving medium and hard problems later.

Happy Coding 🚀
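One further refinement, not covered in the article: at every index the two patterns expect opposite characters, so each position mismatches exactly one of them. That means the two mismatch counts always sum to s.length(), and a single counter suffices. A sketch of that observation (class and method names are mine):

```java
class MinOperationsSketch {
    static int minOperations(String s) {
        int mismatches = 0; // cost of converting s to "0101..."
        for (int i = 0; i < s.length(); i++) {
            char expected = (i % 2 == 0) ? '0' : '1';
            if (s.charAt(i) != expected) mismatches++;
        }
        // Every position that matches "0101..." mismatches "1010..."
        // and vice versa, so the other pattern's cost is the complement.
        return Math.min(mismatches, s.length() - mismatches);
    }
}
```

For s = "0100" this returns 1 and for s = "1111" it returns 2, matching the examples above.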

LeetCode · Binary String · Greedy Algorithm · Java · Easy
Longest Subarray of 1's After Deleting One Element – Sliding Window Approach

Introduction

LeetCode 1493: Longest Subarray of 1's After Deleting One Element is a neat sliding window problem that tests your ability to dynamically adjust a window while handling a constraint: deleting exactly one element. The task is to find the longest subarray of 1's you can get after deleting one element from the array. This problem is an excellent example of how a sliding window with zero counting can convert a potentially brute-force solution into an O(n) linear solution.

If you'd like to try solving the problem first, you can attempt it here:
https://leetcode.com/problems/longest-subarray-of-1s-after-deleting-one-element/

Problem Understanding

You are given a binary array nums containing only 0's and 1's, and you must delete exactly one element. Your task: return the length of the longest non-empty subarray of 1's after deleting one element.

Examples:

Input: nums = [1,1,0,1]
Output: 3
Explanation: Delete the element at index 2 → [1,1,1]. Longest subarray of 1's = 3.

Input: nums = [0,1,1,1,0,1,1,0,1]
Output: 5
Explanation: Delete the element at index 4 → [0,1,1,1,1,1,0,1]. Longest subarray of 1's = 5.

Input: nums = [1,1,1]
Output: 2
Explanation: One element must be deleted → longest subarray = 2.

A naive approach would try removing each element and scanning for the longest subarray: O(n²) time, too slow for nums.length up to 10⁵ and inefficient for large arrays.

Key Idea: Sliding Window with At Most One Zero

Notice the following:

- Deleting one element is equivalent to allowing at most one zero in the subarray.
- We can use a sliding window [i, j] and a counter z for zeros in the window.
- Expand j while z <= 1.
- If z > 1, shrink the window from the left until z <= 1.
- The window length minus the deleted slot, j - i, gives the maximum length of consecutive 1's after deleting one element.

Intuition: only one zero is allowed in the window, because deleting that zero would turn the window into all 1's. This converts the problem into a linear sliding window problem with zero counting.

Approach (Step-by-Step)

1. Initialize window pointers i = 0, j = 0.
2. Initialize z = 0, the number of zeros in the current window.
3. Initialize co = 0, the maximum length of a valid subarray.
4. Iterate j over nums: if nums[j] == 0, increment z.
5. If z <= 1, the window is valid → update co = max(co, j - i).
6. If z > 1, shrink the window from i until z <= 1.
7. Continue expanding the window and return co as the maximum length after deleting one element.

Optimization: only one zero counter and two window pointers are needed, avoiding recomputing subarray lengths repeatedly.

Implementation (Java)

```java
class Solution {
    public int longestSubarray(int[] nums) {
        int i = 0, j = 0; // window pointers
        int co = 0;       // max length
        int z = 0;        // count of zeros in window
        while (j < nums.length) {
            if (nums[j] == 0) {
                z++; // increment zero count
            }
            if (z <= 1) {
                co = Math.max(co, j - i); // valid window
                j++;
            } else {
                // shrink window until at most one zero
                while (z > 1) {
                    if (nums[i] == 0) {
                        z--;
                    }
                    i++;
                }
                co = Math.max(co, j - i);
                j++;
            }
        }
        return co;
    }
}
```

Dry Run Example

Input: nums = [1,1,0,1]

| Window [i, j] | Zeros z | Valid? | Max length co |
|---|---|---|---|
| [0,0] → [1] | 0 | Yes | 0 |
| [0,1] → [1,1] | 0 | Yes | 1 |
| [0,2] → [1,1,0] | 1 | Yes | 2 |
| [0,3] → [1,1,0,1] | 1 | Yes | 3 |

Output: 3

Complexity Analysis

Time Complexity: O(n); each element is visited at most twice (once by j, once by i).
Space Complexity: O(1); only counters and pointers are used.

Edge Cases Considered

- Array of all 1's → must delete one → max length = n - 1
- Array of all 0's → return 0
- Single-element arrays → return 0 (because a deletion is required)
- Zeros at the start/end of the array → handled by the sliding window

Sliding Window Pattern Importance

This problem is a great example of a sliding window with limited violations: maintain a window satisfying a constraint (at most one zero), expand and shrink it dynamically, and compute the max length without scanning all subarrays. It is directly related to problems like:

- Max consecutive ones with k flips
- Longest substring with at most k distinct characters
- Subarray problems with limited replacements

Conclusion

By tracking zeros with a sliding window, we efficiently find the longest subarray of 1's after deleting one element in O(n) time. This pattern is reusable in many binary array and string problems, making it a must-know technique for coding interviews.
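The window logic described above can also be written in the more common for-loop style, where the right pointer advances unconditionally and the left pointer catches up inside a nested loop. A compact equivalent sketch (class and variable names are mine):

```java
class LongestSubarraySketch {
    // Longest run of 1's after deleting exactly one element:
    // longest window with at most one zero, minus the deleted slot.
    static int longestSubarray(int[] nums) {
        int left = 0, zeros = 0, best = 0;
        for (int right = 0; right < nums.length; right++) {
            if (nums[right] == 0) zeros++;
            while (zeros > 1) {              // shrink until at most one zero
                if (nums[left] == 0) zeros--;
                left++;
            }
            best = Math.max(best, right - left); // window length minus one deletion
        }
        return best;
    }
}
```

On the article's examples this returns 3 for [1,1,0,1], 5 for [0,1,1,1,0,1,1,0,1], and 2 for [1,1,1].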

SlidingWindow · BinaryArray · LeetCode · Medium
Max Consecutive Ones III – Sliding Window with Limited Flips

Introduction

LeetCode 1004: Max Consecutive Ones III is a classic sliding window problem that challenges your understanding of arrays, window manipulation, and frequency counting. The goal is to find the longest subarray of consecutive 1's in a binary array if you are allowed to flip at most k zeros to 1's. This problem is an excellent example of transforming a brute-force solution into a linear-time approach using the sliding window pattern.

If you'd like to try solving the problem first, you can attempt it here:
https://leetcode.com/problems/max-consecutive-ones-iii/

Problem Understanding

You are given a binary array nums containing only 0's and 1's, and an integer k representing the maximum number of zeros you can flip. You need to return the length of the longest contiguous subarray of 1's after flipping at most k zeros.

Examples:

Input: nums = [1,1,1,0,0,0,1,1,1,1,0], k = 2
Output: 6
Explanation: Flip two zeros to get [1,1,1,0,0,1,1,1,1,1,1]. Longest run of consecutive ones = 6.

Input: nums = [0,0,1,1,0,0,1,1,1,0,1,1,0,0,0,1,1,1,1], k = 3
Output: 10
Explanation: Flip three zeros to get a longest run of consecutive ones of length 10.

A naive approach would check every possible subarray, flip zeros, and count consecutive ones: O(n²) time, too slow for constraints up to 10⁵ elements.

Key Idea: Sliding Window with Zero Count

Instead of brute force, notice that:

- We only care about how many zeros are in the current window.
- We can maintain a sliding window [i, j] with a counter z for zeros in the window.
- Expand the window by moving j.
- If z exceeds k, shrink the window from the left by moving i, decrementing z for each zero removed.

Intuition: the window always contains at most k zeros, and the length of the window gives the maximum consecutive ones achievable with flips. This allows a linear traversal of the array with O(1) extra space, making it optimal.

Approach (Step-by-Step)

1. Initialize pointers i = 0, j = 0.
2. Initialize z = 0 (zeros in the current window) and co = 0 (max length).
3. Iterate j from 0 to nums.length - 1: if nums[j] == 0, increment z.
4. If z <= k, the window is valid → update co = max(co, j - i + 1).
5. Otherwise, shrink the window by moving i until z <= k, decrementing z for zeros leaving the window.
6. Continue expanding the window with j and return co as the maximum consecutive ones.

Optimization: only one variable is needed for the zero count, which avoids recomputing sums or scanning subarrays repeatedly.

Implementation (Java)

```java
class Solution {
    public int longestOnes(int[] nums, int k) {
        int co = 0;       // maximum length of valid window
        int i = 0, j = 0; // window pointers
        int z = 0;        // count of zeros in current window
        while (j < nums.length) {
            if (nums[j] == 0) {
                z++; // increment zeros count
            }
            if (z <= k) {
                co = Math.max(co, j - i + 1); // valid window
                j++;
            } else {
                // shrink window until zeros <= k
                while (z > k) {
                    if (nums[i] == 0) {
                        z--;
                    }
                    i++;
                }
                co = Math.max(co, j - i + 1);
                j++;
            }
        }
        return co;
    }
}
```

Dry Run Example

Input: nums = [1,1,1,0,0,0,1,1,1,1,0], k = 2

| Window [i, j] | Zeros z | Valid? | Max length co |
|---|---|---|---|
| [0,0] → [1] | 0 | Yes | 1 |
| [0,2] → [1,1,1] | 0 | Yes | 3 |
| [0,3] → [1,1,1,0] | 1 | Yes | 4 |
| [0,4] → [1,1,1,0,0] | 2 | Yes | 5 |
| [0,5] → [1,1,1,0,0,0] | 3 | No → shrink i | 5 |
| [4,9] → [0,0,1,1,1,1] | 2 | Yes | 6 |

Output: 6

Complexity Analysis

Time Complexity: O(n); each element is visited at most twice (once when j moves, once when i moves).
Space Complexity: O(1); only counters and pointers are used.

Edge Cases Considered

- k = 0 → cannot flip any zeros, just count consecutive ones
- Array with all 1's → return the full length
- Array with all 0's → return min(k, length)
- Single-element arrays → work correctly

Sliding Window Pattern Importance

This problem is a perfect example of the sliding window pattern:

- Use a window to track a condition (max zeros allowed)
- Expand and shrink dynamically based on the constraint
- Efficiently compute maximum/minimum contiguous subarray lengths

It also demonstrates counting with limited violations, a key interview concept.

Conclusion

By tracking zeros with a sliding window, we convert a naive O(n²) problem into O(n) linear time. Understanding this pattern allows you to solve max consecutive ones/zeros problems, longest substring/subarray problems with constraints, and subarray problems with limited replacements or violations. Once mastered, this approach applies efficiently to many binary array and string problems.
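The same window logic fits a compact for-loop form, where j advances every iteration and i catches up inside a nested loop. A sketch (class and variable names are mine):

```java
class MaxConsecutiveOnesSketch {
    // Longest window containing at most k zeros.
    static int longestOnes(int[] nums, int k) {
        int left = 0, zeros = 0, best = 0;
        for (int right = 0; right < nums.length; right++) {
            if (nums[right] == 0) zeros++;
            while (zeros > k) {               // too many zeros: shrink from the left
                if (nums[left] == 0) zeros--;
                left++;
            }
            best = Math.max(best, right - left + 1);
        }
        return best;
    }
}
```

On the article's examples this returns 6 for k = 2 and 10 for k = 3.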

SlidingWindow · BinaryArray · LeetCode · Medium
Check if All Characters Have Equal Number of Occurrences – Frequency Map Approach (LeetCode 1941)

🔗 Problem Link

LeetCode 1941 – Check if All Characters Have Equal Number of Occurrences
👉 https://leetcode.com/problems/check-if-all-characters-have-equal-number-of-occurrences/

Introduction

This is one of those problems that looks very simple at first glance (and it actually is), but it helps strengthen your understanding of frequency counting using HashMap. The problem asks us to determine whether all characters in a string occur the same number of times. No sliding window. No binary search. Just clean frequency logic. But even simple problems help build strong foundations.

📌 Problem Understanding

A string is considered "good" if every character that appears in the string appears the same number of times. If even one character has a different frequency → return false.

Example 1
Input: s = "abacbc"
Output: true
Character counts: a → 2, b → 2, c → 2. All equal → ✔ true

Example 2
Input: s = "aaabb"
Output: false
Character counts: a → 3, b → 2. Not equal → ✘ false

🧠 Approach & Intuition

When I saw this problem, my thinking was:

1. Count the frequency of every character.
2. Compare all frequencies.
3. If all are equal → return true.

The important part is choosing a reference frequency and comparing everything against it.

💻 Your Code

```java
class Solution {
    public boolean areOccurrencesEqual(String s) {
        HashMap<Character, Integer> mp = new HashMap<>();
        int ref = 0;
        char c = s.charAt(0);
        for (int i = 0; i < s.length(); i++) {
            if (c == s.charAt(i)) {
                ref++;
            }
            mp.put(s.charAt(i), mp.getOrDefault(s.charAt(i), 0) + 1);
        }
        for (int a : mp.values()) {
            if (ref != a) {
                return false;
            }
        }
        return true;
    }
}
```

🔍 Step-by-Step Explanation

1️⃣ Initialize HashMap

`HashMap<Character, Integer> mp = new HashMap<>();`

This stores the frequency of each character.

2️⃣ Choose Reference Character

`char c = s.charAt(0); int ref = 0;`

You use the first character as a reference, then count how many times it appears while also building the frequency map.

3️⃣ Build Frequency Map

`mp.put(s.charAt(i), mp.getOrDefault(s.charAt(i), 0) + 1);`

This line increases the count for each character.

4️⃣ Compare Frequencies

`for (int a : mp.values()) { if (ref != a) { return false; } }`

If any frequency differs from the reference count → return false. Otherwise → true.

⏱ Time and Space Complexity

Time Complexity: O(n): one loop to count frequencies, plus one loop over at most 26 distinct characters.
Space Complexity: O(26) ≈ O(1), since only lowercase English letters are allowed.

🔥 Small Optimization Idea

Your solution works perfectly. However, we can simplify it slightly: instead of separately counting the reference frequency, first build the entire frequency map, take the frequency of the first character from the map, and compare all values with it.

Cleaner Version

```java
class Solution {
    public boolean areOccurrencesEqual(String s) {
        HashMap<Character, Integer> mp = new HashMap<>();
        for (char ch : s.toCharArray()) {
            mp.put(ch, mp.getOrDefault(ch, 0) + 1);
        }
        int ref = mp.get(s.charAt(0));
        for (int freq : mp.values()) {
            if (freq != ref) {
                return false;
            }
        }
        return true;
    }
}
```

Same logic, slightly cleaner structure.

🎯 Key Learning from This Problem

This problem reinforces:

- Frequency counting using HashMap
- Using a reference value for comparison
- Clean loop logic
- Early return for optimization

Even though it is an easy problem, it builds the base for harder problems like Valid Anagram, Group Anagrams, First Unique Character, and Ransom Note.

🏁 Final Thoughts

Problems like this are not about complexity. They are about writing clean logic, handling frequency maps properly, and thinking clearly about conditions. Mastering easy problems makes medium and hard problems much easier later.
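Since the input is restricted to lowercase English letters (as the complexity section notes), the HashMap can also be replaced with a fixed int[26] array. A sketch of that variant (the class name is mine):

```java
class EqualOccurrencesSketch {
    // Array-based variant: counts fit in int[26] because
    // only lowercase English letters can occur.
    static boolean areOccurrencesEqual(String s) {
        int[] count = new int[26];
        for (char ch : s.toCharArray()) {
            count[ch - 'a']++;
        }
        int ref = count[s.charAt(0) - 'a'];
        for (int c : count) {
            if (c != 0 && c != ref) return false; // skip letters that never appear
        }
        return true;
    }
}
```

For "abacbc" this returns true and for "aaabb" it returns false, matching the examples above. The trade-off is a fixed 26-slot array instead of HashMap boxing overhead.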

HashMap · String · Frequency Count · LeetCode · Easy
All Subsequences of a String (Power Set) | Recursion & Backtracking Java Solution

Introduction

The Power Set problem for strings is a classic question in recursion and backtracking, frequently asked in coding interviews and on platforms like GeeksforGeeks. In this problem, instead of numbers, we deal with strings and generate all possible subsequences (not substrings). This makes it slightly more interesting and practical for real-world applications like pattern matching, text processing, and combinatorics.

In this article, we will cover:

- Intuition behind subsequences
- Recursive (backtracking) approach
- Sorting for lexicographical order
- Alternative approaches
- Complexity analysis

Problem Statement

Given a string s of length n, generate all non-empty subsequences of the string.

Requirements

- Return only non-empty subsequences
- Output must be in lexicographically sorted order

Examples

Example 1
Input: s = "abc"
Output: a ab abc ac b bc c

Example 2
Input: s = "aa"
Output: a a aa

Subsequence vs Substring (Important)

- Substring: continuous characters
- Subsequence: characters can be skipped

Example for "abc": subsequences → a, b, c, ab, ac, bc, abc

Key Insight

For every character, we have two choices: include it OR exclude it. So the total number of subsequences is 2^n. We generate all of them and then remove the empty string.

Approach 1: Recursion (Backtracking)

Intuition: at each index, either skip the character or include it, and build all combinations recursively.

Java Code (With Explanation)

```java
import java.util.*;

class Solution {
    // List to store all subsequences
    List<String> a = new ArrayList<>();

    void sub(String s, int ind, String curr) {
        // Base case: reached end of string
        if (ind == s.length()) {
            a.add(curr); // add current subsequence
            return;
        }
        // Choice 1: Exclude current character
        sub(s, ind + 1, curr);
        // Choice 2: Include current character
        sub(s, ind + 1, curr + s.charAt(ind));
    }

    public List<String> AllPossibleStrings(String s) {
        // Start recursion
        sub(s, 0, "");
        // Remove empty string (not allowed)
        a.remove("");
        // Sort lexicographically
        Collections.sort(a);
        return a;
    }
}
```

Step-by-Step Dry Run (s = "abc")

Start: ""
→ Exclude 'a' → ""
   → Exclude 'b' → ""
      → Exclude 'c' → ""
      → Include 'c' → "c"
   → Include 'b' → "b"
      → Exclude 'c' → "b"
      → Include 'c' → "bc"
→ Include 'a' → "a"
   → Exclude 'b' → "a"
      → Exclude 'c' → "a"
      → Include 'c' → "ac"
   → Include 'b' → "ab"
      → Exclude 'c' → "ab"
      → Include 'c' → "abc"

Final Output (After Sorting)

a ab abc ac b bc c

Approach 2: Bit Manipulation

Intuition: each subsequence can be represented by a binary number, where bit value 0 → exclude and 1 → include.

Code

```java
import java.util.*;

class Solution {
    public List<String> AllPossibleStrings(String s) {
        List<String> result = new ArrayList<>();
        int n = s.length();
        int total = 1 << n; // 2^n
        for (int i = 1; i < total; i++) { // start from 1 to avoid empty
            StringBuilder sb = new StringBuilder();
            for (int j = 0; j < n; j++) {
                if ((i & (1 << j)) != 0) {
                    sb.append(s.charAt(j));
                }
            }
            result.add(sb.toString());
        }
        Collections.sort(result);
        return result;
    }
}
```

Approach 3: Iterative (Expanding List)

Idea: start with a list containing only the empty string; for each character, append it to all existing subsequences.

Code

```java
import java.util.*;

class Solution {
    public List<String> AllPossibleStrings(String s) {
        List<String> result = new ArrayList<>();
        result.add("");
        for (char ch : s.toCharArray()) {
            int size = result.size();
            for (int i = 0; i < size; i++) {
                result.add(result.get(i) + ch);
            }
        }
        result.remove("");
        Collections.sort(result);
        return result;
    }
}
```

Complexity Analysis

Time Complexity: O(n × 2ⁿ)
Space Complexity: O(n × 2ⁿ)

Why? There are 2ⁿ subsequences in total, and each takes O(n) to build.

Why Sorting is Required

The recursion generates subsequences in include/exclude order, which is not lexicographical, so we sort them:

`Collections.sort(result);`

This ensures the lexicographical order the problem requires.

Key Takeaways

- This is a power set problem for strings
- Each character → 2 choices
- Recursion = most intuitive approach
- Bit manipulation = most optimized thinking
- Always remove the empty string if required

Common Interview Variations

- Subsets of an array
- Permutations of a string
- Combination sum
- Subsequences with conditions

Conclusion

The Power Set problem is a fundamental building block in recursion and combinatorics. Once you understand the include/exclude pattern, you can solve a wide range of problems efficiently. Mastering this will significantly improve your ability to tackle backtracking and decision-tree problems.

Frequently Asked Questions (FAQs)

1. Why is the empty string removed?
Because the problem requires only non-empty subsequences.

2. Why is the time complexity O(n × 2ⁿ)?
Because there are 2ⁿ subsequences and each takes O(n) time to construct.

3. Which approach is best?
Recursion → best for understanding. Bit manipulation → best for optimization.

GeeksforGeeksRecursionJavaBacktrackingMedium
What Is Dynamic Programming? Origin Story, Real-Life Uses, LeetCode Problems & Complete Beginner Guide


Introduction — Why Dynamic Programming Feels Hard (And Why It Isn't)
If you've ever stared at a LeetCode problem, read the solution, understood every single line, and still had absolutely no idea how someone arrived at it — welcome. You've just experienced the classic Dynamic Programming (DP) confusion.
DP has a reputation. People treat it like some dark art reserved for competitive programmers or Google engineers. The truth? Dynamic Programming is one of the most logical, learnable, and satisfying techniques in all of computer science. Once it clicks, it really clicks.
This guide will take you from zero to genuinely confident. We'll cover where DP came from, how it works, what patterns to learn, how to recognize DP problems, real-world places it shows up, LeetCode problems to practice, time complexity analysis, and the mistakes that trip up even experienced developers.
Let's go.

The Origin Story — Who Invented Dynamic Programming and Why?
The term "Dynamic Programming" was coined by Richard Bellman in the early 1950s while he was working at RAND Corporation. Here's the funny part: the name was deliberately chosen to sound impressive and vague.
Bellman was doing mathematical research that the then US Secretary of Defense, Charles Wilson, who effectively controlled RAND's funding, would have found difficult to approve if described accurately. Wilson had a well-known distaste for the word "research." So Bellman invented a name that sounded suitably grand and mathematical: Dynamic Programming.
In his autobiography, Bellman wrote that he picked the word "dynamic" because it had a precise technical meaning and was also impossible to use in a pejorative sense. "Programming" referred to the mathematical sense — planning and decision-making — not computer programming.
The underlying idea?
Break a complex problem into overlapping subproblems, solve each subproblem once, and store the result so you never solve it twice.Bellman's foundational contribution was the Bellman Equation, which underpins not just algorithms but also economics, operations research, and modern reinforcement learning.So the next time DP feels frustrating, remember — even its inventor named it specifically to confuse people. You're in good company.What Is Dynamic Programming? (Simple Definition)Dynamic Programming is an algorithmic technique used to solve problems by:Breaking them down into smaller overlapping subproblemsSolving each subproblem only onceStoring the result (memoization or tabulation)Building up the final solution from those stored resultsThe key insight is overlapping subproblems + optimal substructure.Overlapping subproblems means the same smaller problems come up again and again. Instead of solving them every time (like plain recursion does), DP solves them once and caches the answer.Optimal substructure means the optimal solution to the whole problem can be built from optimal solutions to its subproblems.If a problem has both these properties — it's a DP problem.The Two Approaches to Dynamic Programming1. Top-Down with Memoization (Recursive + Cache)You write a recursive solution exactly as you would naturally, but add a cache (usually a dictionary or array) to store results you've already computed.fib(n):if n in cache: return cache[n]if n <= 1: return ncache[n] = fib(n-1) + fib(n-2)return cache[n]This is called memoization — remember what you computed so you don't repeat yourself.Pros: Natural to write, mirrors the recursive thinking, easy to reason about. Cons: Stack overhead from recursion, risk of stack overflow on large inputs.2. 
Bottom-Up with Tabulation (Iterative)
You figure out the order in which subproblems need to be solved, then solve them iteratively from the smallest up, filling a table.

fib(n):
    dp = [0, 1]
    for i from 2 to n:
        dp[i] = dp[i-1] + dp[i-2]
    return dp[n]

This is called tabulation — fill a table, cell by cell, bottom to top.
Pros: No recursion overhead, usually faster in practice, easier to optimize space.
Cons: Requires thinking about the order of computation upfront.

🧩 Dynamic Programming Template Code
Before diving into how to recognize DP problems, here are ready-to-use Java templates for every major DP pattern. Think of these as your reusable blueprints — every DP problem you ever solve will fit into one of these structures. Just define your state, plug in your recurrence relation, and you are good to go.

Template 1 — Top-Down (Memoization)

import java.util.HashMap;
import java.util.Map;

public class TopDownDP {
    Map<Integer, Integer> memo = new HashMap<>();

    public int solve(int n) {
        // Base case
        if (n <= 1) return n;
        // Check cache
        if (memo.containsKey(n)) return memo.get(n);
        // Recurrence relation — change this part for your problem
        int result = solve(n - 1) + solve(n - 2);
        // Store in cache
        memo.put(n, result);
        return result;
    }
}

Template 2 — Bottom-Up (Tabulation)

public class BottomUpDP {
    public int solve(int n) {
        if (n <= 1) return n; // guard so tiny inputs don't overflow the table below
        // Create DP table
        int[] dp = new int[n + 1];
        // Base cases
        dp[0] = 0;
        dp[1] = 1;
        // Fill the table bottom-up
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            dp[i] = dp[i - 1] + dp[i - 2];
        }
        return dp[n];
    }
}

Template 3 — Bottom-Up with Space Optimization

public class SpaceOptimizedDP {
    public int solve(int n) {
        if (n <= 1) return n; // guard: for n = 0 or 1 the answer is n itself
        // Only keep the last two values instead of a full table
        int prev2 = 0;
        int prev1 = 1;
        for (int i = 2; i <= n; i++) {
            // Recurrence relation — change this part for your problem
            int curr = prev1 + prev2;
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1;
    }
}

Template 4 — 2D DP (Two Sequences or Grid)

public class TwoDimensionalDP {
    public int solve(String s1, String s2) {
        int m = s1.length();
        int n = s2.length();
        // Create 2D DP table
        int[][] dp = new int[m + 1][n + 1];
        // Base cases — first row and column
        for (int i = 0; i <= m; i++) dp[i][0] = i;
        for (int j = 0; j <= n; j++) dp[0][j] = j;
        // Fill the table cell by cell
        for (int i = 1; i <= m; i++) {
            for (int j = 1; j <= n; j++) {
                // Recurrence relation — change this part for your problem
                if (s1.charAt(i - 1) == s2.charAt(j - 1)) {
                    dp[i][j] = dp[i - 1][j - 1];
                } else {
                    dp[i][j] = 1 + Math.min(dp[i - 1][j], Math.min(dp[i][j - 1], dp[i - 1][j - 1]));
                }
            }
        }
        return dp[m][n];
    }
}

Template 5 — Knapsack Pattern

public class KnapsackDP {
    public int solve(int[] weights, int[] values, int capacity) {
        int n = weights.length;
        // dp[i][w] = max value using the first i items with capacity w
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= capacity; w++) {
                // Don't take item i
                dp[i][w] = dp[i - 1][w];
                // Take item i if it fits
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i][w], values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                }
            }
        }
        return dp[n][capacity];
    }
}

💡 How to use these templates:
Step 1 — Identify which pattern your problem fits into.
Step 2 — Define what dp[i] or dp[i][j] means in plain English before writing any code.
Step 3 — Write your recurrence relation on paper first.
Step 4 — Plug it into the matching template above.
Step 5 — Handle your specific base cases carefully.

🎥 Visual Learning Resource — Watch This Before Moving Forward
If you prefer learning by watching before reading, this free full-length course by freeCodeCamp is one of the best Dynamic Programming resources on the internet. Watch it alongside this guide for maximum understanding.
Credit: freeCodeCamp — a free, nonprofit coding education platform.

How to Recognize a Dynamic Programming Problem
Ask yourself these four questions:
1. Can I define the problem in terms of smaller versions of itself? If you can write a recursive formula (recurrence relation), DP might apply.
2. Do the subproblems overlap?
If a naive recursive solution would recompute the same thing many times, DP is the right tool.3. Is there an optimal substructure? Is the best answer to the big problem made up of best answers to smaller problems?4. Are you looking for a count, minimum, maximum, or yes/no answer? DP problems often ask: "What is the minimum cost?", "How many ways?", "Can we achieve X?"Red flag words in problem statements: minimum, maximum, shortest, longest, count the number of ways, can we reach, is it possible, fewest steps.The Core DP Patterns You Must LearnMastering DP is really about recognizing patterns. Here are the most important ones:Pattern 1 — 1D DP (Linear) Problems where the state depends on previous elements in a single sequence. Examples: Fibonacci, Climbing Stairs, House Robber.Pattern 2 — 2D DP (Grid / Two-sequence) Problems with two dimensions of state, often grids or two strings. Examples: Longest Common Subsequence, Edit Distance, Unique Paths.Pattern 3 — Interval DP You consider all possible intervals or subarrays and build solutions from them. Examples: Matrix Chain Multiplication, Burst Balloons, Palindrome Partitioning.Pattern 4 — Knapsack DP (0/1 and Unbounded) You decide whether to include or exclude items under a capacity constraint. Examples: 0/1 Knapsack, Coin Change, Partition Equal Subset Sum.Pattern 5 — DP on Trees State is defined per node; you combine results from children. Examples: Diameter of Binary Tree, House Robber III, Maximum Path Sum.Pattern 6 — DP on Subsets / Bitmask DP State includes a bitmask representing which elements have been chosen. Examples: Travelling Salesman Problem, Shortest Superstring.Pattern 7 — DP on Strings Matching, editing, or counting arrangements within strings. Examples: Longest Palindromic Subsequence, Regular Expression Matching, Wildcard Matching.Top LeetCode Problems to Practice Dynamic Programming (With Links)Here are the essential problems, organized by difficulty and pattern. 
Solve them in this order.

Beginner — Warm Up
Problem | Pattern | Link
Climbing Stairs | 1D DP | https://leetcode.com/problems/climbing-stairs/
Fibonacci Number | 1D DP | https://leetcode.com/problems/fibonacci-number/
House Robber | 1D DP | https://leetcode.com/problems/house-robber/
Min Cost Climbing Stairs | 1D DP | https://leetcode.com/problems/min-cost-climbing-stairs/
Best Time to Buy and Sell Stock | 1D DP | https://leetcode.com/problems/best-time-to-buy-and-sell-stock/

Intermediate — Core Patterns
Problem | Pattern | Link
Coin Change | Knapsack | https://leetcode.com/problems/coin-change/
Longest Increasing Subsequence | 1D DP | https://leetcode.com/problems/longest-increasing-subsequence/
Longest Common Subsequence | 2D DP | https://leetcode.com/problems/longest-common-subsequence/
0/1 Knapsack (via Subset Sum) | Knapsack | https://leetcode.com/problems/partition-equal-subset-sum/
Unique Paths | 2D Grid DP | https://leetcode.com/problems/unique-paths/
Jump Game | 1D DP / Greedy | https://leetcode.com/problems/jump-game/
Word Break | String DP | https://leetcode.com/problems/word-break/
Decode Ways | 1D DP | https://leetcode.com/problems/decode-ways/
Edit Distance | 2D String DP | https://leetcode.com/problems/edit-distance/
Triangle | 2D DP | https://leetcode.com/problems/triangle/

Advanced — Interview Level
Problem | Pattern | Link
Burst Balloons | Interval DP | https://leetcode.com/problems/burst-balloons/
Regular Expression Matching | String DP | https://leetcode.com/problems/regular-expression-matching/
Wildcard Matching | String DP | https://leetcode.com/problems/wildcard-matching/
Palindrome Partitioning II | Interval DP | https://leetcode.com/problems/palindrome-partitioning-ii/
Maximum Profit in Job Scheduling | DP + Binary Search | https://leetcode.com/problems/maximum-profit-in-job-scheduling/
Distinct Subsequences | 2D DP | https://leetcode.com/problems/distinct-subsequences/
Cherry Pickup | 3D DP | https://leetcode.com/problems/cherry-pickup/

Real-World Use Cases of Dynamic Programming
DP is not just for coding interviews. It is deeply embedded in the technology you use every day.
1.
Google Maps & Navigation (Shortest Path) The routing engines behind GPS apps use DP-based algorithms like Dijkstra and Bellman-Ford to find the shortest or fastest path between two points across millions of nodes.2. Spell Checkers & Autocorrect (Edit Distance) When your phone corrects "teh" to "the," it is computing Edit Distance — a classic DP problem — between what you typed and every word in the dictionary.3. DNA Sequence Alignment (Bioinformatics) Researchers use the Needleman-Wunsch and Smith-Waterman algorithms — both DP — to align DNA and protein sequences and find similarities between species or identify mutations.4. Video Compression (MPEG, H.264) Modern video codecs use DP to determine the most efficient way to encode video frames, deciding which frames to store as full images and which to store as differences from the previous frame.5. Financial Portfolio Optimization Investment algorithms use DP to find the optimal allocation of assets under risk constraints — essentially a variant of the knapsack problem.6. Natural Language Processing (NLP) The Viterbi algorithm — used in speech recognition, part-of-speech tagging, and machine translation — is a DP algorithm. Every time Siri or Google Assistant understands your sentence, DP played a role.7. Game AI (Chess, Checkers) Game trees and minimax algorithms with memoization use DP to evaluate board positions and find the best move without recomputing already-seen positions.8. Compiler Optimization Compilers use DP to decide the optimal order of operations and instruction scheduling to generate the most efficient machine code.9. Text Justification (Word Processors) Microsoft Word and LaTeX use DP to optimally break paragraphs into lines — minimizing raggedness and maximizing visual appeal.10. 
Resource Scheduling in Cloud Computing AWS, Google Cloud, and Azure use DP-based scheduling to assign computational tasks to servers in the most cost-efficient way possible.

Time Complexity Analysis of Common DP Problems
Understanding the time complexity of DP is critical for interviews and for building scalable systems.

Problem | Time Complexity | Space Complexity | Notes
Fibonacci (naive recursion) | O(2ⁿ) | O(n) | Exponential — terrible
Fibonacci (DP) | O(n) | O(1) with optimization | Linear — excellent
Longest Common Subsequence | O(m × n) | O(m × n) | m, n = lengths of the two strings
Edit Distance | O(m × n) | O(m × n) | Can optimize space to O(n)
0/1 Knapsack | O(n × W) | O(n × W) | n = items, W = capacity
Coin Change | O(n × amount) | O(amount) | Classic tabulation
Longest Increasing Subsequence | O(n²) or O(n log n) | O(n) | Binary search version is faster
Matrix Chain Multiplication | O(n³) | O(n²) | Interval DP
Travelling Salesman (bitmask) | O(2ⁿ × n²) | O(2ⁿ × n) | Still exponential but manageable for small n

The general rule: DP trades space for time. You use memory to avoid recomputation. The time complexity equals the number of unique states multiplied by the work done per state.

How to Learn and Master Dynamic Programming — Step by Step
Here is an honest, structured path to mastery:
Step 1 — Get recursion absolutely solid first. DP is memoized recursion at its core. If you cannot write clean recursive solutions confidently, DP will remain confusing. Practice at least 20 pure recursion problems first.
Step 2 — Start with the classics. Fibonacci → Climbing Stairs → House Robber → Coin Change. These teach you the core pattern of defining state and transition without overwhelming you.
Step 3 — Learn to define state explicitly. Before writing any code, ask: "What does dp[i] represent?" Write it in plain English. "dp[i] = the minimum cost to reach step i." This single habit separates good DP thinkers from struggling ones.
Step 4 — Write the recurrence relation before coding. On paper or in a comment. Example: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2]).
If you can write the recurrence, the code writes itself.Step 5 — Master one pattern at a time. Don't jump between knapsack and interval DP in the same week. Spend a few days on each pattern until it feels intuitive.Step 6 — Solve the same problem both ways. Top-down and bottom-up. This builds deep understanding of what DP is actually doing.Step 7 — Optimize space after getting correctness. Many 2D DP solutions can use a single row instead of a full matrix. Learn this optimization after you understand the full solution.Step 8 — Do timed practice under interview conditions. Give yourself 35 minutes per problem. Review what you got wrong. DP is a muscle — it builds with reps.Common Mistakes in Dynamic Programming (And How to Avoid Them)Mistake 1 — Jumping to code before defining state. The most common DP error. Always define what dp[i] or dp[i][j] means before writing a single line of code.Mistake 2 — Wrong base cases. A single wrong base case corrupts every answer built on top of it. Trace through your base cases manually on a tiny example before running code.Mistake 3 — Off-by-one errors in indexing. Whether your dp array is 0-indexed or 1-indexed must be 100% consistent throughout. This causes more bugs in DP than almost anything else.Mistake 4 — Confusing top-down with bottom-up state order. In bottom-up DP, you must ensure that when you compute dp[i], all values it depends on are already filled. If you compute in the wrong order, you get garbage answers.Mistake 5 — Memoizing in the wrong dimension. In 2D problems, some people cache only one dimension when the state actually requires two. Always identify all variables that affect the outcome.Mistake 6 — Using global mutable state in recursion. If you use a shared array and don't clear it between test cases, you'll get wrong answers on subsequent inputs. Always scope your cache correctly.Mistake 7 — Not considering the full state space. 
In problems like Knapsack, forgetting that the state is (item index, remaining capacity) — not just item index — leads to fundamentally wrong solutions.Mistake 8 — Giving up after not recognizing the pattern immediately. DP problems don't announce themselves. The skill is learning to ask "is there overlapping subproblems here?" on every problem. This takes time. Don't mistake unfamiliarity for inability.Frequently Asked Questions About Dynamic ProgrammingQ: Is Dynamic Programming the same as recursion? Not exactly. Recursion is a technique for breaking problems into smaller pieces. DP is recursion plus memoization — or iterative tabulation. All DP can be written recursively, but not all recursion is DP.Q: What is the difference between DP and Divide and Conquer? Divide and Conquer (like Merge Sort) breaks problems into non-overlapping subproblems. DP is used when subproblems overlap — meaning the same subproblem is solved multiple times in a naive approach.Q: How do I know when NOT to use DP? If the subproblems don't overlap (no repeated computation), greedy or divide-and-conquer may be better. If the problem has no optimal substructure, DP won't give a correct answer.Q: Do I need to memorize DP solutions for interviews? No. You need to recognize patterns and be able to derive the recurrence relation. Memorizing solutions without understanding them will fail you in interviews. Focus on the thinking process.Q: How long does it take to get good at DP? Most people start to feel genuinely comfortable after solving 40–60 varied DP problems with deliberate practice. The first 10 feel impossible. The next 20 feel hard. After 50, patterns start feeling obvious.Q: What programming language is best for DP? Any language works. Python is often used for learning because its dictionaries make memoization trivial. C++ is preferred in competitive programming for its speed. For interviews, use whatever language you're most comfortable in.Q: What is space optimization in DP? 
Many DP problems only look back one or two rows to compute the current row. In those cases, you can replace an n×m table with just two arrays (or even one), reducing space complexity from O(n×m) to O(m). This is called space optimization or rolling array technique.Q: Can DP be applied to graph problems? Absolutely. Shortest path algorithms like Bellman-Ford are DP. Longest path in a DAG is DP. DP on trees is a rich subfield. Anywhere you have states and transitions, DP can potentially apply.Q: Is Greedy a type of Dynamic Programming? Greedy is related but distinct. Greedy makes locally optimal choices without reconsidering. DP considers all choices and picks the globally optimal one. Some DP solutions reduce to greedy when the structure allows, but they are different techniques.Q: What resources should I use to learn DP? For structured learning: Neetcode.io (organized problem list), Striver's DP Series on YouTube, and the book "Introduction to Algorithms" (CLRS) for theoretical depth. For practice: LeetCode's Dynamic Programming study plan and Codeforces for competitive DP.Final Thoughts — Dynamic Programming Is a SuperpowerDynamic Programming is genuinely one of the most powerful ideas in computer science. It shows up in your GPS, your autocorrect, your streaming video, your bank's risk models, and the AI assistants you talk to daily.The path to mastering it is not memorization. It is developing the habit of asking: can I break this into smaller problems that overlap? And then learning to define state clearly, write the recurrence, and trust the process.Start with Climbing Stairs. Write dp[i] in plain English before every problem. Solve everything twice — top-down and bottom-up. Do 50 problems with genuine reflection, not just accepted solutions.The click moment will come. And when it does, you'll wonder why it ever felt hard.
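To make the advice above concrete (define dp[i] in plain English, write the recurrence first, then optimize space with rolling variables), here is a sketch of Min Cost Climbing Stairs, the problem whose recurrence appears in Step 4. The class and method names are illustrative, not from any listing above.

```java
// Min Cost Climbing Stairs (LeetCode 746).
// State in plain English: dp[i] = the minimum cost to reach step i.
// Recurrence: dp[i] = min(dp[i-1] + cost[i-1], dp[i-2] + cost[i-2])
// Illustrative sketch using the rolling-variable (O(1) space) technique.
public class MinCostClimbingStairs {
    public static int minCost(int[] cost) {
        int n = cost.length;
        int prev2 = 0; // dp[i-2]: you may start on step 0 for free
        int prev1 = 0; // dp[i-1]: or start on step 1 for free
        for (int i = 2; i <= n; i++) {
            // Pay for the step you leave from, arriving at step i
            int curr = Math.min(prev1 + cost[i - 1], prev2 + cost[i - 2]);
            prev2 = prev1;
            prev1 = curr;
        }
        return prev1; // dp[n] = minimum cost to reach the top
    }
}
```

For cost = [10, 15, 20] this returns 15: start at step 1, pay 15, jump to the top.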

Dynamic ProgrammingMemoizationTabulationJavaOrigin StoryRichard Bellman
Recursion in Java - Complete Guide With Examples and Practice Problems


IntroductionIf there is one topic in programming that confuses beginners more than anything else, it is recursion. Most people read the definition, nod their head, and then immediately freeze when they have to write recursive code themselves.The problem is not that recursion is genuinely hard. The problem is that most explanations start with code before building the right mental model. Once you have the right mental model, recursion clicks permanently and you start seeing it everywhere — in tree problems, graph problems, backtracking, dynamic programming, divide and conquer, and more.This guide covers everything from the ground up. What recursion is, how the call stack works, how to identify base cases and recursive cases, every type of recursion, common patterns, time and space complexity analysis, the most common mistakes, and the top LeetCode problems to practice.By the end of this article, recursion will not feel like magic anymore. It will feel like a natural tool you reach for confidently.What Is Recursion?Recursion is when a function calls itself to solve a smaller version of the same problem.That is the complete definition. But let us make it concrete.Imagine you want to count down from 5 to 1. One way is a loop. Another way is — print 5, then solve the exact same problem for counting down from 4 to 1. Then print 4, solve for 3. And so on until you reach the base — there is nothing left to count down.void countDown(int n) { if (n == 0) return; // stop here System.out.println(n); countDown(n - 1); // solve the smaller version}The function countDown calls itself with a smaller input each time. Eventually it reaches 0 and stops. That stopping condition is the most important part of any recursive function — the base case.The Two Parts Every Recursive Function Must HaveEvery correctly written recursive function has exactly two parts. 
Without both, the function either gives wrong answers or runs forever.

Part 1: Base Case
The base case is the condition under which the function stops calling itself and returns a direct answer. It is the smallest version of the problem that you can solve without any further recursion.
Without a base case, recursion never stops and you get a StackOverflowError — Java's way of telling you the call stack ran out of memory.

Part 2: Recursive Case
The recursive case is where the function calls itself with a smaller or simpler input — moving closer to the base case with each call. If your recursive case does not make the problem smaller, you have an infinite loop.
Think of it like a staircase. The base case is the ground floor. The recursive case is each step going down. Every step must genuinely bring you one level closer to the ground.

How Recursion Works — The Call Stack
This is the mental model that most explanations skip, and it is the reason recursion confuses people.
Every time a function is called in Java, a new stack frame is created and pushed onto the call stack. This frame stores the function's local variables, parameters, and where to return to when the function finishes.
When a recursive function calls itself, a new frame is pushed on top. When that call finishes, its frame is popped and execution returns to the previous frame.

Let us trace countDown(3) through the call stack:

countDown(3) called → frame pushed
  prints 3
  calls countDown(2) → frame pushed
    prints 2
    calls countDown(1) → frame pushed
      prints 1
      calls countDown(0) → frame pushed
        n == 0, return → frame popped
      back in countDown(1), return → frame popped
    back in countDown(2), return → frame popped
  back in countDown(3), return → frame popped

Output: 3, 2, 1

The call stack grows as calls go deeper, then shrinks as calls return.
This is why recursion uses O(n) space for n levels deep — each level occupies one stack frame in memory.

Your First Real Recursive Function — Factorial
Factorial is the classic first recursion example.
n! = n × (n-1) × (n-2) × ... × 1
Notice the pattern: n! = n × (n-1)!. The factorial of n is n times the factorial of n-1. That recursive structure makes it perfect for recursion.

public int factorial(int n) {
    // base case
    if (n == 0 || n == 1) return 1;
    // recursive case
    return n * factorial(n - 1);
}

Dry Run — factorial(4)
factorial(4)
= 4 * factorial(3)
= 4 * 3 * factorial(2)
= 4 * 3 * 2 * factorial(1)
= 4 * 3 * 2 * 1
= 24

The call stack builds up going in, then the multiplications happen coming back out. This "coming back out" phase is called the return phase or unwinding of the stack.
Time Complexity: O(n) — n recursive calls
Space Complexity: O(n) — n frames on the call stack

The Two Phases of Recursion
Every recursive function has two phases, and understanding both is critical.
Phase 1: The Call Phase (Going In)
This happens as the function keeps calling itself with smaller inputs. Things you do before the recursive call happen in this phase — in order from the outermost call to the innermost.
Phase 2: The Return Phase (Coming Back Out)
This happens as each call finishes and returns to its caller. Things you do after the recursive call happen in this phase — in reverse order, from the innermost call back to the outermost.
This distinction explains why the output order can be surprising:

void printBothPhases(int n) {
    if (n == 0) return;
    System.out.println("Going in: " + n);   // call phase
    printBothPhases(n - 1);
    System.out.println("Coming out: " + n); // return phase
}

For printBothPhases(3):
Going in: 3
Going in: 2
Going in: 1
Coming out: 1
Coming out: 2
Coming out: 3

This two-phase understanding is what makes problems like reversing a string or printing a linked list backwards via recursion feel natural.

Types of Recursion
Recursion is not one-size-fits-all.
There are several distinct types, and knowing which type applies to a problem shapes how you write the solution.

1. Direct Recursion
The function calls itself directly. This is the most common type — what we have seen so far.

void direct(int n) {
    if (n == 0) return;
    direct(n - 1); // calls itself
}

2. Indirect Recursion
Function A calls Function B, which calls Function A. They form a cycle.

void funcA(int n) {
    if (n <= 0) return;
    System.out.println("A: " + n);
    funcB(n - 1);
}

void funcB(int n) {
    if (n <= 0) return;
    System.out.println("B: " + n);
    funcA(n - 1);
}

Used in: state machines, mutual recursion in parsers, certain mathematical sequences.

3. Tail Recursion
The recursive call is the last operation in the function. Nothing happens after the recursive call returns — no multiplication, no addition, nothing.

// NOT tail recursive — multiplication happens after the call returns
int factorial(int n) {
    if (n == 1) return 1;
    return n * factorial(n - 1); // multiply after return — not tail
}

// Tail recursive — the recursive call is the last thing
int factorialTail(int n, int accumulator) {
    if (n == 1) return accumulator;
    return factorialTail(n - 1, n * accumulator); // last operation
}

Why does tail recursion matter? In languages that support tail call optimization (like Scala, Kotlin, and many functional languages), tail recursive functions can be converted to iteration internally — no stack frame accumulation, O(1) space. Java does NOT perform tail call optimization, but understanding tail recursion is still important for interviews and functional programming concepts.

4. Head Recursion
The recursive call happens first, before any other processing. All processing happens in the return phase.

void headRecursion(int n) {
    if (n == 0) return;
    headRecursion(n - 1);  // call first
    System.out.println(n); // process after
}
// For headRecursion(5), output: 1 2 3 4 5 (processes in reverse order of the calls)

5. Tree Recursion
The function makes more than one recursive call per invocation.
This creates a tree of calls rather than a linear chain. Fibonacci is the classic example.

int fibonacci(int n) {
    if (n <= 1) return n;
    return fibonacci(n - 1) + fibonacci(n - 2); // TWO recursive calls
}

The call tree for fibonacci(4):

              fib(4)
             /      \
         fib(3)    fib(2)
         /    \    /    \
    fib(2) fib(1) fib(1) fib(0)
     /   \
 fib(1) fib(0)

Time Complexity: O(2ⁿ) — exponential! Each call spawns two more.
Space Complexity: O(n) — maximum depth of the call tree

This is why memoization (caching results) is so important for tree recursion — it converts O(2ⁿ) to O(n) by never recomputing the same subproblem twice.

6. Mutual Recursion
A specific form of indirect recursion where two functions call each other alternately to solve a problem. It differs from general indirect recursion in that the mutual calls are the core mechanism of the solution.

// Check if a number is even or odd using mutual recursion
boolean isEven(int n) {
    if (n == 0) return true;
    return isOdd(n - 1);
}

boolean isOdd(int n) {
    if (n == 0) return false;
    return isEven(n - 1);
}

Common Recursion Patterns in DSA
These are the patterns you will see over and over in interview problems.
Recognizing them is more important than memorizing solutions.Pattern 1: Linear Recursion (Do Something, Recurse on Rest)Process the current element, then recurse on the remaining problem.// Sum of arrayint arraySum(int[] arr, int index) { if (index == arr.length) return 0; // base case return arr[index] + arraySum(arr, index + 1); // current + rest}Pattern 2: Divide and Conquer (Split Into Two Halves)Split the problem into two halves, solve each recursively, combine results.// Merge Sortvoid mergeSort(int[] arr, int left, int right) { if (left >= right) return; // base case — single element int mid = (left + right) / 2; mergeSort(arr, left, mid); // sort left half mergeSort(arr, mid + 1, right); // sort right half merge(arr, left, mid, right); // combine}Pattern 3: Backtracking (Try, Recurse, Undo)Try a choice, recurse to explore it, undo the choice when backtracking.// Generate all subsetsvoid subsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { if (index == nums.length) { result.add(new ArrayList<>(current)); return; } // Choice 1: include nums[index] current.add(nums[index]); subsets(nums, index + 1, current, result); current.remove(current.size() - 1); // undo // Choice 2: exclude nums[index] subsets(nums, index + 1, current, result);}Pattern 4: Tree Recursion (Left, Right, Combine)Recurse on left subtree, recurse on right subtree, combine or process results.// Height of binary treeint height(TreeNode root) { if (root == null) return 0; // base case int leftHeight = height(root.left); // solve left int rightHeight = height(root.right); // solve right return 1 + Math.max(leftHeight, rightHeight); // combine}Pattern 5: Memoization (Cache Recursive Results)Store results of recursive calls so the same subproblem is never solved twice.Map<Integer, Integer> memo = new HashMap<>();int fibonacci(int n) { if (n <= 1) return n; if (memo.containsKey(n)) return memo.get(n); // return cached int result = fibonacci(n - 1) + fibonacci(n - 2); 
memo.put(n, result); // cache before returning return result;}This converts Fibonacci from O(2ⁿ) to O(n) time with O(n) space — a massive improvement.Recursion vs Iteration — When to Use WhichThis is one of the most common interview questions about recursion. Here is a clear breakdown:Use Recursion when:The problem has a naturally recursive structure (trees, graphs, divide and conquer)The solution is significantly cleaner and easier to understand recursivelyThe problem involves exploring multiple paths or choices (backtracking)The depth of recursion is manageable (not too deep to cause stack overflow)Use Iteration when:The problem is linear and a loop is equally clearMemory is a concern (iteration uses O(1) stack space vs O(n) for recursion)Performance is critical and function call overhead mattersJava's stack size limit could be hit (the default thread stack of roughly 512 KB to 1 MB allows on the order of 10,000 frames, depending on frame size)The key rule: Every recursive solution can be converted to an iterative one (usually using an explicit stack).
But recursive solutions for tree and graph problems are almost always cleaner to write and understand.Time and Space Complexity of Recursive FunctionsAnalyzing complexity for recursive functions requires a specific approach.The Recurrence Relation MethodExpress the time complexity as a recurrence relation and solve it.Factorial:T(n) = T(n-1) + O(1) = T(n-2) + O(1) + O(1) = T(1) + n×O(1) = O(n)Fibonacci (naive):T(n) = T(n-1) + T(n-2) + O(1) ≈ 2×T(n-1) = O(2ⁿ)Binary Search:T(n) = T(n/2) + O(1) = O(log n) [by Master Theorem]Merge Sort:T(n) = 2×T(n/2) + O(n) = O(n log n) [by Master Theorem]Space Complexity Rule for RecursionSpace complexity of a recursive function = maximum depth of the call stack × space per frameLinear recursion (factorial, sum): O(n) spaceBinary recursion (Fibonacci naive): O(n) space (maximum depth, not number of nodes)Divide and conquer (merge sort): O(log n) space (depth of recursion tree)Memoized Fibonacci: O(n) space (memo table + call stack)Classic Recursive Problems With SolutionsProblem 1: Reverse a StringString reverse(String s) { if (s.length() <= 1) return s; // base case // last char + reverse of everything before last char return s.charAt(s.length() - 1) + reverse(s.substring(0, s.length() - 1));}Note that each substring call copies characters, so this runs in O(n²) overall.Dry run for "hello":reverse("hello") = 'o' + reverse("hell")reverse("hell") = 'l' + reverse("hel")reverse("hel") = 'l' + reverse("he")reverse("he") = 'e' + reverse("h")reverse("h") = "h"Unwinding: "h" → "eh" → "leh" → "lleh" → "olleh" ✅Problem 2: Power Function (x^n)double power(double x, int n) { if (n == 0) return 1; // base case if (n < 0) return 1.0 / power(x, -n); // handle negative (n = Integer.MIN_VALUE would overflow here) if (n % 2 == 0) { double half = power(x, n / 2); return half * half; // x^n = (x^(n/2))^2 } else { return x * power(x, n - 1); }}This is the fast power algorithm — O(log n) time instead of O(n).Problem 3: Fibonacci With Memoizationint[] memo = new int[100];Arrays.fill(memo, -1);int fib(int n) { if (n <= 1) return n; if (memo[n] != -1) return memo[n]; memo[n] =
fib(n - 1) + fib(n - 2); return memo[n];}Time: O(n) — each value computed once Space: O(n) — memo array + call stackProblem 4: Tower of HanoiThe classic recursion teaching problem. Move n disks from source to destination using a helper rod.void hanoi(int n, char source, char destination, char helper) { if (n == 1) { System.out.println("Move disk 1 from " + source + " to " + destination); return; } // Move n-1 disks from source to helper hanoi(n - 1, source, helper, destination); // Move the largest disk from source to destination System.out.println("Move disk " + n + " from " + source + " to " + destination); // Move n-1 disks from helper to destination hanoi(n - 1, helper, destination, source);}Time Complexity: O(2ⁿ) — minimum moves required is 2ⁿ - 1 Space Complexity: O(n) — call stack depthProblem 5: Generate All Subsets (Power Set)void generateSubsets(int[] nums, int index, List<Integer> current, List<List<Integer>> result) { result.add(new ArrayList<>(current)); // add current subset for (int i = index; i < nums.length; i++) { current.add(nums[i]); // include generateSubsets(nums, i + 1, current, result); // recurse current.remove(current.size() - 1); // exclude (backtrack) }}For [1, 2, 3] — generates all 8 subsets: [], [1], [1,2], [1,2,3], [1,3], [2], [2,3], [3]Time: O(2ⁿ) — 2ⁿ subsets Space: O(n) — recursion depthProblem 6: Binary Search Recursivelyint binarySearch(int[] arr, int target, int left, int right) { if (left > right) return -1; // base case — not found int mid = left + (right - left) / 2; if (arr[mid] == target) return mid; else if (arr[mid] < target) return binarySearch(arr, target, mid + 1, right); else return binarySearch(arr, target, left, mid - 1);}Time: O(log n) — halving the search space each time Space: O(log n) — log n frames on the call stackRecursion on Trees — The Natural HabitatTrees are where recursion truly shines. 
Every tree problem becomes elegant with recursion because a tree is itself a recursive structure — each node's left and right children are trees themselves.// Maximum depth of binary treeint maxDepth(TreeNode root) { if (root == null) return 0; return 1 + Math.max(maxDepth(root.left), maxDepth(root.right));}// Check if tree is symmetricboolean isSymmetric(TreeNode left, TreeNode right) { if (left == null && right == null) return true; if (left == null || right == null) return false; return left.val == right.val && isSymmetric(left.left, right.right) && isSymmetric(left.right, right.left);}// Path sum — does any root-to-leaf path sum to target?boolean hasPathSum(TreeNode root, int target) { if (root == null) return false; if (root.left == null && root.right == null) return root.val == target; return hasPathSum(root.left, target - root.val) || hasPathSum(root.right, target - root.val);}Notice the pattern in all three — base case handles null, recursive case handles left and right subtrees, result combines both.How to Think About Any Recursive Problem — Step by StepThis is the framework you should apply to every new recursive problem you encounter:Step 1 — Identify the base case What is the smallest input where you know the answer directly without any recursion? For arrays it is usually empty array or single element. For trees it is null node. For numbers it is 0 or 1.Step 2 — Trust the recursive call Assume your function already works correctly for smaller inputs. Do not trace through the entire recursion mentally — just trust it. This is the Leap of Faith and it is what makes recursion feel natural.Step 3 — Express the current problem in terms of smaller problems How does the answer for size n relate to the answer for size n-1 (or n/2, or subtrees)? This relationship is your recursive case.Step 4 — Make sure each call moves toward the base case The input must become strictly smaller with each call. 
If it does not, you have infinite recursion.Step 5 — Write the base case first, then the recursive case Always. Writing the recursive case first leads to bugs because you have not defined when to stop.Common Mistakes and How to Avoid ThemMistake 1: Missing or wrong base case The most common mistake. Missing the base case causes StackOverflowError. Wrong base case causes wrong answers.Always ask — what is the simplest possible input, and what should the function return for it? Write that case first.Mistake 2: Not moving toward the base case If you call factorial(n) inside factorial(n) without reducing n, you loop forever. Every recursive call must make the problem strictly smaller.Mistake 3: Trusting your brain to trace deep recursion Do not try to trace 10 levels of recursion in your head. Trust the recursive call, verify the base case, and check that each call reduces the problem. That is all you need.Mistake 4: Forgetting to return the recursive result// WRONG — result is computed but not returnedint sum(int n) { if (n == 0) return 0; sum(n - 1) + n; // computed but discarded!}// CORRECTint sum(int n) { if (n == 0) return 0; return sum(n - 1) + n;}Mistake 5: Modifying shared state without backtracking In backtracking problems, if you add something to a list before a recursive call, you must remove it after the call returns. Forgetting to backtrack leads to incorrect results and is one of the trickiest bugs to find.Mistake 6: Recomputing the same subproblems Naive Fibonacci computes fib(3) multiple times when computing fib(5). Use memoization whenever you notice overlapping subproblems in your recursion tree.Top LeetCode Problems on RecursionThese are organized by pattern — work through them in this order for maximum learning:Pure Recursion Basics:509. Fibonacci Number — Easy — start here, implement with and without memoization344. Reverse String — Easy — recursion on arrays206. Reverse Linked List — Easy — recursion on linked list50. 
Pow(x, n) — Medium — fast power with recursionTree Recursion (Most Important):104. Maximum Depth of Binary Tree — Easy — simplest tree recursion112. Path Sum — Easy — decision recursion on tree101. Symmetric Tree — Easy — mutual recursion on tree110. Balanced Binary Tree — Easy — bottom-up recursion236. Lowest Common Ancestor of a Binary Tree — Medium — classic tree recursion124. Binary Tree Maximum Path Sum — Hard — advanced tree recursionDivide and Conquer:148. Sort List — Medium — merge sort on linked list240. Search a 2D Matrix II — Medium — divide and conquerBacktracking:78. Subsets — Medium — generate all subsets46. Permutations — Medium — generate all permutations77. Combinations — Medium — generate combinations79. Word Search — Medium — backtracking on grid51. N-Queens — Hard — classic backtrackingMemoization / Dynamic Programming:70. Climbing Stairs — Easy — Fibonacci variant with memoization322. Coin Change — Medium — recursion with memoization to DP139. Word Break — Medium — memoized recursionRecursion Cheat Sheet// Linear recursion templatereturnType solve(input) { if (baseCase) return directAnswer; // process current return solve(smallerInput);}// Tree recursion templatereturnType solve(TreeNode root) { if (root == null) return baseValue; returnType left = solve(root.left); returnType right = solve(root.right); return combine(left, right, root.val);}// Backtracking templatevoid backtrack(choices, current, result) { if (goalReached) { result.add(copy of current); return; } for (choice : choices) { make(choice); // add to current backtrack(...); // recurse undo(choice); // remove from current }}// Memoization templateMap<Input, Output> memo = new HashMap<>();returnType solve(input) { if (baseCase) return directAnswer; if (memo.containsKey(input)) return memo.get(input); returnType result = solve(smallerInput); memo.put(input, result); return result;}FAQs — People Also AskQ1. What is recursion in Java with a simple example? 
Recursion is when a function calls itself to solve a smaller version of the same problem. A simple example is factorial — factorial(5) = 5 × factorial(4) = 5 × 4 × factorial(3) and so on until factorial(1) returns 1 directly.Q2. What is the difference between recursion and iteration? Iteration uses loops (for, while) and runs in O(1) space. Recursion uses function calls and uses O(n) stack space for n levels deep. Recursion is often cleaner for tree and graph problems. Iteration is better when memory is a concern or the problem is inherently linear.Q3. What causes StackOverflowError in Java recursion? StackOverflowError happens when recursion goes too deep — too many frames accumulate on the call stack before any of them return. This is caused by missing base case, wrong base case, or input too large for Java's default stack size limit.Q4. What is the difference between recursion and dynamic programming? Recursion solves a problem by breaking it into subproblems. Dynamic programming is recursion plus memoization — storing results of subproblems so they are never computed twice. DP converts exponential recursive solutions into polynomial ones by eliminating redundant computation.Q5. What is tail recursion and does Java support tail call optimization? Tail recursion is when the recursive call is the absolute last operation in the function. Java does NOT support tail call optimization — Java always creates a new stack frame for each call even if it is tail recursive. Languages like Scala and Kotlin (on the JVM) do support it with the tailrec keyword.Q6. How do you convert recursion to iteration? Every recursive solution can be converted to iterative using an explicit stack data structure. The call stack's behavior is replicated manually — push the initial call, loop while stack is not empty, pop, process, and push sub-calls. Tree traversals are a common example of this conversion.ConclusionRecursion is not magic. 
It is a systematic way of solving problems by expressing them in terms of smaller versions of themselves. Once you internalize the two parts (base case and recursive case), understand the call stack mentally, and learn to trust the recursive call rather than trace it completely, everything clicks.The learning path from here is clear — start with simple problems like Fibonacci and array sum. Move to tree problems where recursion is most natural. Then tackle backtracking. Finally add memoization to bridge into dynamic programming.Every hour you spend understanding recursion deeply pays dividends across the entire rest of your DSA journey. Trees, graphs, divide and conquer, backtracking, dynamic programming — all of them build on this foundation.
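As a concrete illustration of the recursion-to-iteration conversion described in the FAQ above, here is a minimal sketch in Java. The class and method names are illustrative (not from any library): the linear-recursion sum from Pattern 1 is rewritten with an explicit ArrayDeque standing in for the call stack.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class IterativeConversion {
    // Recursive version: sum(n) = n + sum(n - 1), base case sum(0) = 0
    static int sumRecursive(int n) {
        if (n == 0) return 0;
        return n + sumRecursive(n - 1);
    }

    // Same logic with an explicit stack replacing the call stack.
    static int sumIterative(int n) {
        Deque<Integer> stack = new ArrayDeque<>();
        while (n > 0) {      // "descent" phase: push one frame per call
            stack.push(n);
            n--;
        }
        int result = 0;      // base case value
        while (!stack.isEmpty()) {
            result += stack.pop();  // "unwinding" phase: combine results
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(sumRecursive(5));  // 15
        System.out.println(sumIterative(5));  // 15
    }
}
```

The two while loops mirror the two phases of recursion: the first pushes pending work exactly as recursive calls would, and the second combines results in the same order the call stack would unwind.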

RecursionJavaBase CaseCall StackBacktrackingDynamic Programming
LeetCode 784 Letter Case Permutation | Recursion & Backtracking Java Solution

IntroductionThe Letter Case Permutation problem is a classic example of recursion and backtracking, often asked in coding interviews and frequently searched by learners preparing for platforms like LeetCode.This problem helps in understanding:Decision-making at each stepRecursive branchingString manipulationIn this article, we’ll break down the intuition, visualize the decision process using a decision tree, and implement an efficient Java solution.🔗 Problem LinkLeetCode: Letter Case PermutationProblem StatementGiven a string s, you can transform each alphabetical character into:LowercaseUppercaseDigits remain unchanged.👉 Return all possible strings formed by these transformations.ExamplesExample 1Input:s = "a1b2"Output:["a1b2","a1B2","A1b2","A1B2"]Example 2Input:s = "3z4"Output:["3z4","3Z4"]Key InsightAt each character:If it's a digit → only one choiceIf it's a letter → two choices:lowercase OR uppercaseSo total combinations:2^(number of letters)Intuition (Using a Decision Tree)For input: "a1b2"Start from index 0: "" / \ "a" "A" | | "a1" "A1" / \ / \ "a1b" "a1B" "A1b" "A1B" | | | | "a1b2" "a1B2" "A1b2" "A1B2"Understanding the TreeAt 'a' → branch into 'a' and 'A''1' → no branching (digit)'b' → again branching'2' → no branching📌 Leaf nodes = final answersApproach: Recursion + BacktrackingIdeaTraverse the string character by characterIf digit → move forwardIf letter → branch into:lowercaseuppercaseJava Codeimport java.util.*;class Solution { // List to store all results List<String> lis = new ArrayList<>(); public void solve(String s, int ind, String ans) { // Base case: reached end of string if (ind == s.length()) { lis.add(ans); // store generated string return; } char ch = s.charAt(ind); // If character is a digit → only one option if (ch >= '0' && ch <= '9') { solve(s, ind + 1, ans + ch); } else { // Choice 1: convert to lowercase solve(s, ind + 1, ans + Character.toLowerCase(ch)); // Choice 2: convert to uppercase solve(s, ind + 1, ans +
Character.toUpperCase(ch)); } } public List<String> letterCasePermutation(String s) { solve(s, 0, ""); // start recursion return lis; }}Step-by-Step ExecutionFor "a1b2":Start → ""'a' → "a", "A"'1' → "a1", "A1"'b' → "a1b", "a1B", "A1b", "A1B"'2' → final stringsComplexity AnalysisTime Complexity: O(n × 2^k), where k = number of letters and n = string length (2^k strings are generated, each of length n)Space Complexity: O(n × 2^k) (for storing results)Why This Approach WorksRecursion explores all possibilitiesEach letter creates a branching pointDigits pass through unchangedBranching on every letter ensures all combinations are generatedKey TakeawaysThis is a binary decision recursion problemLetters → 2 choicesDigits → 1 choiceDecision tree directly maps to recursionPattern similar to:SubsetsPermutations with conditionsWhen This Problem Is AskedCommon in:Coding interviewsRecursion/backtracking roundsString manipulation problemsConclusionThe Letter Case Permutation problem is a perfect example of how recursion can be used to explore all possible combinations efficiently.Once the decision tree is clear, the implementation becomes straightforward. This pattern is widely used in many advanced problems, making it essential to master.Frequently Asked Questions (FAQs)1. Why don’t digits create branches?Because they have only one valid form.2. What is the main concept used?Recursion with decision-making (backtracking).3. Can this be solved iteratively?Yes, using BFS or iterative expansion, but recursion is more intuitive.
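As a sketch of the iterative-expansion alternative mentioned in the FAQs, here is one possible Java version; the class name is illustrative. It keeps a list of partial strings and extends every one of them with each valid choice for the next character.

```java
import java.util.ArrayList;
import java.util.List;

public class LetterCaseIterative {
    // Iterative expansion: start from "" and grow all partial strings
    // one character at a time, branching only at letters.
    static List<String> letterCasePermutation(String s) {
        List<String> results = new ArrayList<>();
        results.add("");
        for (char ch : s.toCharArray()) {
            List<String> next = new ArrayList<>();
            for (String prefix : results) {
                if (Character.isDigit(ch)) {
                    next.add(prefix + ch);                        // digit: one choice
                } else {
                    next.add(prefix + Character.toLowerCase(ch)); // letter: two choices
                    next.add(prefix + Character.toUpperCase(ch));
                }
            }
            results = next;
        }
        return results;
    }

    public static void main(String[] args) {
        System.out.println(letterCasePermutation("a1b2"));
        // [a1b2, a1B2, A1b2, A1B2]
    }
}
```

Each pass over the string doubles the list at letters and leaves it unchanged at digits, so the final list has exactly 2^k entries for k letters, matching the recursive decision tree level by level.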

LeetCodeMediumJavaRecursion
Permutation with Spaces Explained Using Recursion & Decision Tree | Java Solution GFG

IntroductionThe Permutation with Spaces problem is a classic recursion question that helps build a strong understanding of decision-making and backtracking patterns.Instead of generating permutations by rearranging characters, this problem focuses on inserting spaces between characters in all possible ways.What makes this problem powerful is its decision tree structure, which maps directly onto the recursive calls. In this article, we will connect that intuition with code.Link of Problem: GeeksforGeeks – Permutation with SpacesProblem StatementGiven a string s, generate all possible strings by placing:Either a spaceOr no spacebetween every pair of characters.Return all results in sorted order.ExampleInput:s = "ABC"Output:A B CA BCAB CABCUnderstanding the Decision Tree (Very Important)Two Choices at Each Step:❌ Do NOT add space before the character✔️ Add space before the characterMapping the TreeFrom the diagram:At B:"AB" → no space"A B" → spaceAt C:From "AB":"ABC""AB C"From "A B":"A BC""A B C"Final Output (Leaf Nodes)As shown in the diagram:ABC, AB C, A BC, A B C📌 This is exactly what recursion generates.Key InsightAt every index (except first), we have:2 choices → space OR no spaceSo total combinations:2^(n-1)Approach: Recursion + Decision MakingIdeaFix the first characterFor every next character:Add space + characterAdd character directlyContinue recursivelyJava Code with Detailed Commentsimport java.util.*;class Solution { // List to store all results ArrayList<String> lis = new ArrayList<>(); void solve(String s, int ind, String curr) { // Base case: // If index reaches end of string, // we have formed one valid permutation if (ind == s.length()) { lis.add(curr); // store the result return; } // Choice 1: Add SPACE before current character // Example: "A" → "A B" solve(s, ind + 1, curr + " " + s.charAt(ind)); // Choice 2: Do NOT add space // Example: "A" → "AB" solve(s, ind + 1, curr + s.charAt(ind)); } ArrayList<String> permutation(String s) { // Start with
first character (no space before it) String curr = "" + s.charAt(0); // Start recursion from index 1 solve(s, 1, curr); // Sort results as required in problem Collections.sort(lis); return lis; }}Step-by-Step Execution (Using the Tree)For "ABC":Start → "A"At "B":"AB""A B"At "C":"ABC", "AB C""A BC", "A B C"Exactly matches the decision tree leaf nodes ✅Complexity AnalysisTime Complexity: O(n × 2ⁿ) (2ⁿ⁻¹ strings, each of length up to 2n - 1, plus the final sort)Space Complexity: O(n × 2ⁿ)Why This Approach WorksRecursion explores every possible choiceEach level = one characterEach branch = decision (space / no space)Leaf nodes = final answersKey TakeawaysThis is a binary decision recursion problemAlways identify:ChoicesBase conditionThe decision tree = direct blueprint of recursionSame pattern applies to:SubsetsBinary choices problemsConclusionThe Permutation with Spaces problem becomes extremely simple once the decision tree is understood; the tree is a direct blueprint of the solution.The recursion directly follows the same structure:Every branch = one decisionEvery leaf = one answerMaster this pattern, and you’ll find many recursion problems much easier to solve.
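Since each of the n-1 gaps between characters is an independent binary choice, the same output can also be generated without recursion by enumerating bitmasks from 0 to 2^(n-1) - 1. A minimal sketch (the class name is illustrative; like the recursive version, it assumes a non-empty input):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SpacePermutationsBitmask {
    // Bit i-1 of the mask decides whether a space goes before character i.
    static List<String> permutation(String s) {
        int n = s.length();
        List<String> results = new ArrayList<>();
        for (int mask = 0; mask < (1 << (n - 1)); mask++) {
            StringBuilder sb = new StringBuilder();
            sb.append(s.charAt(0));           // first character: never preceded by a space
            for (int i = 1; i < n; i++) {
                if ((mask & (1 << (i - 1))) != 0) sb.append(' ');
                sb.append(s.charAt(i));
            }
            results.add(sb.toString());
        }
        Collections.sort(results);            // problem asks for sorted order
        return results;
    }

    public static void main(String[] args) {
        System.out.println(permutation("ABC"));
        // [A B C, A BC, AB C, ABC]
    }
}
```

Each mask corresponds to exactly one leaf of the decision tree, which makes the 2^(n-1) count of results immediate.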

MediumGeeksforGeeksRecursionJava