26 Init defaultdict 10s

defaultdict eliminates key-existence checks. Simplifies grouping and adjacency list construction.

from collections import defaultdict
d = defaultdict(list)
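For example, building an adjacency list from an edge list needs no key-existence checks:

```python
from collections import defaultdict

# Undirected adjacency list; missing keys start as [] automatically.
adj = defaultdict(list)
for u, v in [(0, 1), (0, 2), (1, 2)]:
    adj[u].append(v)
    adj[v].append(u)
# adj[0] == [1, 2]
```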
27 Counter from list 10s

Counter gives instant frequency maps. Powers anagram checks and top-k problems.

from collections import Counter
freq = Counter(a)
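Both uses in one sketch: anagram check by comparing counts, top-k via most_common:

```python
from collections import Counter

# Anagram check: two strings are anagrams iff their frequency maps match.
print(Counter("listen") == Counter("silent"))   # True

# Top-k: most_common returns (element, count) pairs, highest count first.
freq = Counter("aabbbc")
print(freq.most_common(2))                      # [('b', 3), ('a', 2)]
```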
28 Get with default 5s

dict.get() avoids KeyError. Cleaner than try/except for missing-key scenarios.

d.get(k, 0)
29 Dict comprehension 10s

Dict comprehensions build lookup tables in one line. Used for index maps and inversions.

{k: v for k, v in items}
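For instance, a value-to-index map over a list, the building block of two-sum style lookups:

```python
a = ["x", "y", "z"]
# value -> index lookup table in one line
idx = {v: i for i, v in enumerate(a)}
print(idx["y"])   # 1
```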
30 Check key exists 5s

Membership testing is O(1) in hash maps. Foundation for two-sum and duplicate detection.

if k in d:
31 Merge two dicts 5s

Dict merging combines configurations or frequency maps. Python 3.5+ unpacking syntax.
On key conflicts the right-hand dict wins. Python 3.9+ also supports merged = d1 | d2.

merged = {**d1, **d2}
32 Invert a dict 10s

Dict inversion swaps keys and values for reverse lookups. Useful in encoding problems. Caveat: duplicate values collapse (the last key wins), and values must be hashable.

inv = {v: k for k, v in d.items()}
33 Group by key 10s

Grouping by computed key is the core of group-anagrams and bucket-sort approaches.

from collections import defaultdict
groups = defaultdict(list)
for item in items:
    groups[key_fn(item)].append(item)
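With a sorted-string key function this becomes group-anagrams (key_fn and the sample words here are illustrative):

```python
from collections import defaultdict

def group_by(items, key_fn):
    # Bucket items by computed key; defaultdict creates buckets on demand.
    groups = defaultdict(list)
    for item in items:
        groups[key_fn(item)].append(item)
    return groups

words = ["eat", "tea", "tan", "nat"]
groups = group_by(words, key_fn=lambda w: "".join(sorted(w)))
print(list(groups.values()))   # [['eat', 'tea'], ['tan', 'nat']]
```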
137 Clone linked list with random 15s

Two-pass hash map maps old nodes to new clones. Second pass wires next/random.

d = {None: None}
n = head
while n:
    d[n] = Node(n.val)
    n = n.next
n = head
while n:
    d[n].next = d[n.next]
    d[n].random = d[n.random]
    n = n.next
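A runnable version of the two passes, with a minimal Node class assumed for illustration:

```python
class Node:                           # assumed shape: val, next, random
    def __init__(self, val, next=None, random=None):
        self.val, self.next, self.random = val, next, random

def clone(head):
    d = {None: None}                  # None -> None handles tail pointers
    n = head
    while n:                          # pass 1: old node -> fresh clone
        d[n] = Node(n.val)
        n = n.next
    n = head
    while n:                          # pass 2: wire next/random via the map
        d[n].next = d[n.next]
        d[n].random = d[n.random]
        n = n.next
    return d[head]

# tiny list: a -> b, with a.random = b and b.random = a
b = Node(2)
a = Node(1, next=b)
a.random, b.random = b, a
c = clone(a)
assert c is not a and c.next.random is c   # clone points into clone, not original
```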
138 LRU cache get/put skeleton 15s

OrderedDict gives O(1) get/put with LRU eviction. Classic system design building block.

from collections import OrderedDict
class LRU:
    def __init__(self, cap):
        self.cap = cap
        self.d = OrderedDict()
    def get(self, k):
        if k not in self.d: return -1
        self.d.move_to_end(k)
        return self.d[k]
    def put(self, k, v):
        self.d[k] = v
        self.d.move_to_end(k)
        if len(self.d) > self.cap:
            self.d.popitem(last=False)
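A usage sketch: with capacity 2, touching key 1 then inserting a third key evicts key 2, the least recently used. The class is repeated here so the snippet runs standalone:

```python
from collections import OrderedDict

class LRU:                          # same skeleton as the card above
    def __init__(self, cap):
        self.cap = cap
        self.d = OrderedDict()
    def get(self, k):
        if k not in self.d: return -1
        self.d.move_to_end(k)       # mark as most recently used
        return self.d[k]
    def put(self, k, v):
        self.d[k] = v
        self.d.move_to_end(k)
        if len(self.d) > self.cap:
            self.d.popitem(last=False)   # evict least recently used

cache = LRU(2)
cache.put(1, "a"); cache.put(2, "b")
cache.get(1)                        # touch 1; 2 becomes least recent
cache.put(3, "c")                   # evicts key 2
assert cache.get(2) == -1
assert cache.get(1) == "a"
```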
149 Partition labels (last occurrence) 10s

Map each char to its last index. Extend partition end greedily.

last = {c: i for i, c in enumerate(s)}
res = []
start = end = 0
for i, c in enumerate(s):
    end = max(end, last[c])
    if i == end:
        res.append(end - start + 1)
        start = i + 1
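On a sample input (illustrative), the greedy loop yields three partitions:

```python
def partition_labels(s):
    last = {c: i for i, c in enumerate(s)}   # last index of each char
    res, start, end = [], 0, 0
    for i, c in enumerate(s):
        end = max(end, last[c])              # partition must reach this far
        if i == end:                         # no char escapes: cut here
            res.append(end - start + 1)
            start = i + 1
    return res

print(partition_labels("ababcbacadefegdehijhklij"))   # [9, 7, 8]
```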
241 HashMap vs sorting for duplicates 15s

HashMap = O(n) time, O(n) space. Sorting = O(n log n) time, O(1) extra space if in-place. Choose based on constraints.

HASHMAP vs SORTING FOR DUPLICATES
─────────────────────────────────
HASHMAP (Counter/set):
  + O(n) time
  + Preserves original order
  + Online: can process stream
  - O(n) extra space
  - Hash collisions (rare, but exist)

SORTING:
  + O(1) extra space (if in-place sort allowed)
  + Duplicates are adjacent → easy to skip
  + Natural for "kth largest" type problems
  - O(n log n) time
  - Destroys original order

DECISION:
  "Contains duplicate?"     → set, O(n)
  "Count occurrences?"      → Counter, O(n)
  "Find all duplicates?"    → Counter or sort
  "K most frequent?"        → Counter + heap
  "Remove duplicates?"      → set (unordered) or sort (ordered)
  "Group anagrams?"         → sorted string as key → HashMap

BOTH WORK for: two sum (hash O(n) vs sort O(n log n))
ONLY HASH for: subarray sum (prefix sum + hash)
ONLY SORT for: merge intervals, meeting rooms
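Both sides of the "contains duplicate?" decision, side by side (illustrative):

```python
def has_dup_hash(a):            # O(n) time, O(n) space
    seen = set()
    for x in a:
        if x in seen:
            return True
        seen.add(x)
    return False

def has_dup_sort(a):            # O(n log n) time; O(1) extra if sorted in place
    a = sorted(a)               # copy here to avoid mutating the input
    return any(a[i] == a[i + 1] for i in range(len(a) - 1))

print(has_dup_hash([1, 2, 3, 1]), has_dup_sort([1, 2, 3, 1]))   # True True
```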
254 Complement map (two sum) 10s

Store what you've seen, check for the complement. Turns O(n^2) brute force into O(n). The fundamental hash map pattern.

seen = {}
for i, x in enumerate(a):
    if target - x in seen:
        return [seen[target - x], i]
    seen[x] = i
255 Frequency count (first unique) 10s

Two-pass frequency count: first pass counts, second pass finds the target. Works for first unique, most common, etc.

freq = {}
for c in s:
    freq[c] = freq.get(c, 0) + 1
for i, c in enumerate(s):
    if freq[c] == 1:
        return i
256 Trie insert and search 15s

Trie insert walks/creates nodes per character, marks end. Search is identical but checks existence instead of creating.

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

def insert(root, word):
    node = root
    for ch in word:
        if ch not in node.children:
            node.children[ch] = TrieNode()
        node = node.children[ch]
    node.is_end = True
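The exact-match search half described above: the same walk as insert, but checking edges instead of creating them. A compact insert (setdefault variant of the card above) is included so the snippet runs standalone:

```python
class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_end = False

def insert(root, word):
    node = root
    for ch in word:
        node = node.children.setdefault(ch, TrieNode())
    node.is_end = True

def search(root, word):
    node = root
    for ch in word:
        if ch not in node.children:   # missing edge: word absent
            return False
        node = node.children[ch]
    return node.is_end                # must be a full word, not just a prefix

root = TrieNode()
insert(root, "apple")
print(search(root, "apple"))   # True
print(search(root, "app"))     # False: prefix only, is_end not set
```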
257 Trie search with wildcard DFS 15s

Wildcard '.' branches to all children. Without wildcards it's a straight walk — the DFS only fires on '.' characters.

def search(node, word, i=0):
    if i == len(word):
        return node.is_end
    if word[i] == '.':
        return any(search(ch, word, i+1)
                   for ch in node.children.values())
    if word[i] not in node.children:
        return False
    return search(node.children[word[i]], word, i+1)
258 Trie prefix check (starts_with) 10s

Same as search but returns True when you reach the end of the prefix — no need to check is_end.

def starts_with(root, prefix):
    node = root
    for ch in prefix:
        if ch not in node.children:
            return False
        node = node.children[ch]
    return True