6 drills · 6 pattern exercises · 13 micro drills
Patterns used in this topic, with difficulty breakdown.
All 6 drills grouped by pattern. Each includes prompt, solution, and complexity.
In plain English: Find two numbers in an unsorted list that add up to a target and return their indices.
Instead of checking all pairs in O(n²), store each value's index in a hash map; the complement lookup is then O(1).
Prompt
Given unsorted array a and target t, return indices of the pair that sums to t.
Solution
def two_sum(a, t):
    seen = {}
    for i, x in enumerate(a):
        # check complement before storing
        if t - x in seen:
            return [seen[t - x], i]
        seen[x] = i
    return []

# Test: two_sum([2,7,11,15], 9) == [0, 1]
In plain English: Given ice cream prices and a budget, find two flavors that cost exactly the budget.
Same as two-sum: for each price, check if budget minus price has already been seen.
Prompt
Given prices and a budget, find two items that sum to exactly the budget.
Solution
def ice_cream(prices, money):
    seen = {}
    for i, p in enumerate(prices):
        complement = money - p
        if complement in seen:
            return [seen[complement] + 1, i + 1]  # 1-indexed
        seen[p] = i
    return []

# Test: ice_cream([1,4,5,3,2], 4) == [1, 4]
In plain English: Find the first character in a string that appears only once.
Two passes: first count all frequencies, then scan left-to-right for the first count of 1 — the scan order ensures 'first'.
Prompt
Find the index of the first character that appears only once.
Solution
def first_unique(s):
    # first pass: count, second pass: find
    freq = {}
    for c in s:
        freq[c] = freq.get(c, 0) + 1
    for i, c in enumerate(s):
        if freq[c] == 1:
            return i
    return -1

# Test: first_unique("aabbcdd") == 4 (index of 'c')
In plain English: Group words that contain the exact same letters, like 'eat' and 'tea'.
Anagrams share the same sorted character sequence — use that as a hashmap key to bucket them together.
Prompt
Group a list of words by anagram.
Solution
from collections import defaultdict

def group_anagrams(words):
    groups = defaultdict(list)
    for w in words:
        key = tuple(sorted(w))  # sorted chars = canonical key
        groups[key].append(w)
    return list(groups.values())

# Test: group_anagrams(["eat","tea","tan","ate","nat","bat"])
# => [["eat","tea","ate"], ["tan","nat"], ["bat"]]
In plain English: Count how many unique pairs of numbers in an array differ by exactly k.
A set gives O(1) membership testing — for each x, just check if x+k exists.
Prompt
Count unique pairs where |a-b| = k.
Solution
def count_pairs_diff(a, k):
    s = set(a)  # O(1) lookup for complement
    count = 0
    for x in s:
        if x + k in s:  # only check x+k, not x-k, to avoid double counting
            count += 1
    return count

# Test: count_pairs_diff([1,5,3,4,2], 2) == 3
In plain English: Count how many contiguous subarrays add up to exactly k.
If prefix[j] - prefix[i] = k, the subarray i..j sums to k — a hashmap of prefix counts lets you find matching i's in O(1).
Prompt
Count subarrays whose elements sum to exactly k.
Solution
def subarray_sum(a, k):
    count = 0
    prefix = 0
    mp = {0: 1}  # empty prefix seen once, so subarrays starting at index 0 count
    for x in a:
        prefix += x
        count += mp.get(prefix - k, 0)  # how many earlier prefixes match?
        mp[prefix] = mp.get(prefix, 0) + 1
    return count

# Test: subarray_sum([1,1,1], 2) == 2
6 exercises: spot the signals, pick the method, avoid the trap.
Given unsorted array a and target t, return indices of the pair that sums to t.
Signals
For each element x, check if (target - x) is in the map. Insert x after checking. O(n).
Sort + two pointers is O(n log n) but loses original indices — need extra bookkeeping.
Trap
Brute force O(n²) checks all pairs. The hash map complement lookup is the key insight.
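The sort + two pointers alternative mentioned above can be sketched as follows. `two_sum_sorted` is a hypothetical name; the (value, original index) pairs are the extra bookkeeping needed because sorting scrambles positions.

```python
def two_sum_sorted(a, t):
    # keep (value, original_index) pairs so sorting doesn't lose the indices
    pairs = sorted((x, i) for i, x in enumerate(a))
    l, r = 0, len(pairs) - 1
    while l < r:
        s = pairs[l][0] + pairs[r][0]
        if s == t:
            return sorted([pairs[l][1], pairs[r][1]])
        if s < t:
            l += 1  # sum too small: advance the left pointer
        else:
            r -= 1  # sum too large: retreat the right pointer
    return []

# two_sum_sorted([2,7,11,15], 9) == [0, 1]
```

The sort dominates at O(n log n), so the hash-map version remains the better answer when indices matter.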
Find the index of the first character that appears only once.
Signals
Pass 1: count frequencies. Pass 2: scan left-to-right, return first count==1. O(n).
Trap
Nested loops checking each character against all others is O(n²). Counter reduces to O(n).
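A minimal sketch of the Counter version named above; `first_unique_counter` is a hypothetical name, and the logic matches the two-pass solution with the first pass replaced by `collections.Counter`.

```python
from collections import Counter

def first_unique_counter(s):
    freq = Counter(s)  # pass 1: count in one line
    # pass 2: scan order guarantees we return the *first* unique index
    return next((i for i, c in enumerate(s) if freq[c] == 1), -1)

# first_unique_counter("aabbcdd") == 4
```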
Group a list of words by anagram.
Signals
Key = sorted(word), value = list of words. All anagrams map to the same key. O(n * k log k).
Trap
Comparing all pairs is O(n² * k). Using a character count tuple as key avoids sorting: O(n * k).
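The count-tuple key mentioned above can be sketched like this, assuming lowercase a-z words; `group_anagrams_counts` is a hypothetical name.

```python
from collections import defaultdict

def group_anagrams_counts(words):
    groups = defaultdict(list)
    for w in words:
        counts = [0] * 26  # one slot per letter, assuming lowercase a-z
        for c in w:
            counts[ord(c) - ord('a')] += 1
        groups[tuple(counts)].append(w)  # tuple is hashable, lists are not
    return list(groups.values())

# group_anagrams_counts(["eat","tea","tan","ate","nat","bat"])
# => [["eat","tea","ate"], ["tan","nat"], ["bat"]]
```

Counting is O(k) per word versus O(k log k) for sorting, which matters only for long words.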
Given prices and a budget, find two items that sum to exactly the budget.
Signals
Identical to two sum. For each price, check complement in hash map. O(n).
Sort prices + two pointers. O(n log n). Need to map back to original indices.
Trap
Recognize this as a two-sum variant — don't overthink the story framing.
Count unique pairs where |a-b| = k.
Signals
Put all elements in a set. For each x, check if x+k is in the set. O(n).
Sort + two pointers can also find all pairs with difference k in O(n log n).
Trap
Watch for k=0: need count of duplicates, not self-pairing. Use a Counter instead of a set.
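The k=0 handling above can be sketched with a Counter; `count_pairs_diff_k` is a hypothetical name for this variant.

```python
from collections import Counter

def count_pairs_diff_k(a, k):
    freq = Counter(a)
    if k == 0:
        # a value pairs with itself only if it appears at least twice
        return sum(1 for v in freq.values() if v > 1)
    return sum(1 for x in freq if x + k in freq)

# count_pairs_diff_k([1,3,1,5,4], 0) == 1
# count_pairs_diff_k([1,5,3,4,2], 2) == 3
```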
Count subarrays whose elements sum to exactly k.
Signals
Running prefix sum + hash map counting. For each prefix[j], count of prefix[j]-k seen so far. O(n).
The hash map IS the core of the solution — combined with prefix sums.
Trap
Sliding window doesn't work here because elements can be negative. Prefix sum + hash map is needed.
13 quick recall drills to reinforce this topic.
Store what you've seen, check for the complement. Turns the O(n²) brute force into O(n). The fundamental hash map pattern.
seen = {}
for i, x in enumerate(a):
    if target - x in seen:
        return [seen[target - x], i]
    seen[x] = i
Hash-map window tracks last index of each char. Avoids set removal on duplicates.
seen = {}
l = res = 0
for r, c in enumerate(s):
    if c in seen and seen[c] >= l:
        l = seen[c] + 1
    seen[c] = r
    res = max(res, r - l + 1)
Try placing each number into k buckets. Prune: skip if bucket overflows, skip symmetric bucket states.
def canPartitionKSubsets(nums, k):
    total = sum(nums)
    if total % k: return False
    target = total // k
    nums.sort(reverse=True)
    buckets = [0] * k

    def backtrack(i):
        if i == len(nums): return all(b == target for b in buckets)
        seen = set()
        for j in range(k):
            if buckets[j] + nums[i] > target: continue
            if buckets[j] in seen: continue  # prune symmetric
            seen.add(buckets[j])
            buckets[j] += nums[i]
            if backtrack(i + 1): return True
            buckets[j] -= nums[i]
        return False

    return backtrack(0)
Two-pass frequency count: first pass counts, second pass finds the target. Works for first unique, most common, etc.
freq = {}
for c in s:
    freq[c] = freq.get(c, 0) + 1
for i, c in enumerate(s):
    if freq[c] == 1:
        return i
Character counting is the core technique for anagram detection and frequency-based problems.
from collections import Counter
freq = Counter(s)
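A common use of the Counter technique is a direct anagram check; `is_anagram` is a hypothetical name for this sketch.

```python
from collections import Counter

def is_anagram(a, b):
    # two strings are anagrams iff their character counts match
    return Counter(a) == Counter(b)

# is_anagram("eat", "tea") == True
# is_anagram("eat", "bat") == False
```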
Window is valid when (window_size - max_freq) <= k. max_freq only needs to increase — shrinking the window can't help.
count = {}
l = max_freq = 0
for r in range(len(s)):
    count[s[r]] = count.get(s[r], 0) + 1
    max_freq = max(max_freq, count[s[r]])
    if (r - l + 1) - max_freq > k:
        count[s[l]] -= 1
        l += 1
defaultdict eliminates key-existence checks. Simplifies grouping and adjacency list construction.
from collections import defaultdict
d = defaultdict(list)
Grouping by computed key is the core of group-anagrams and bucket-sort approaches.
from collections import defaultdict
groups = defaultdict(list)
for item in items:
    groups[key_fn(item)].append(item)
Adjacency list is the standard graph representation. First step for BFS/DFS problems.
from collections import defaultdict
graph = defaultdict(list)
# or: {i: [] for i in range(n)}
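A minimal sketch of building the adjacency list from an edge list, assuming an undirected graph; `build_graph` is a hypothetical name.

```python
from collections import defaultdict

def build_graph(edges):
    graph = defaultdict(list)
    for u, v in edges:
        # undirected: record both directions of each edge
        graph[u].append(v)
        graph[v].append(u)
    return graph

# build_graph([(0,1),(1,2)]) => {0: [1], 1: [0, 2], 2: [1]}
```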
XOR marks differing bits, then Brian Kernighan's trick counts them in O(k) where k is the number of set bits.
xor = x ^ y
count = 0
while xor:
    xor &= xor - 1  # clear lowest set bit
    count += 1
Carry max-so-far down the path. A node is good if its value >= the path max.
def dfs(n, mx):
    if not n: return 0
    good = 1 if n.val >= mx else 0
    mx = max(mx, n.val)
    return good + dfs(n.left, mx) + dfs(n.right, mx)
Recursive node counting teaches divide-and-conquer tree thinking.
def count(node):
    if not node: return 0
    return 1 + count(node.left) + count(node.right)
Fixed-size sliding window computes running aggregates in O(n). Core optimization pattern.
mx = cur = sum(a[:k])
for i in range(k, len(a)):
    cur += a[i] - a[i-k]
    mx = max(mx, cur)