Meta typically runs two coding rounds back-to-back, often as a single 90-minute block with two problems per round. Pace matters as much as correctness. These are the patterns that come up most.
Validate a Binary Search Tree (Meta-favorite, asked in coding round 1).
Meta loves this because the naive solution — checking left < root < right at each node — is wrong, and they want to see if you catch it. A node deeper in the right subtree of the root must still be greater than the root, not just greater than its immediate parent. The clean approach is recursion with min/max bounds passed down: at each node, ensure node.val is strictly greater than the lower bound and strictly less than the upper bound, then recurse left with upper bound updated to node.val, and right with lower bound updated to node.val. Use Long.MIN_VALUE and Long.MAX_VALUE (or null sentinels) to avoid edge cases with Integer.MIN_VALUE in the actual tree. Time O(n), space O(h) for recursion. Meta's follow-up is almost always: do it iteratively using inorder traversal — a valid BST has a strictly increasing inorder sequence.
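A minimal Python sketch of the bounds-passing recursion described above, using None sentinels instead of Long.MIN_VALUE/MAX_VALUE (the TreeNode class is an illustrative assumption, not a fixed interface):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(node, low=None, high=None):
    # None sentinels sidestep the Integer.MIN_VALUE edge case.
    if node is None:
        return True
    if low is not None and node.val <= low:
        return False
    if high is not None and node.val >= high:
        return False
    # Left subtree inherits a tighter upper bound, right a tighter lower bound.
    return (is_valid_bst(node.left, low, node.val)
            and is_valid_bst(node.right, node.val, high))
```

Note how the naive local check would accept a 4 sitting in the right subtree of a 5; the inherited lower bound catches it.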
Given a string, find the longest substring without repeating characters.
This is the canonical Meta sliding-window question. Two pointers, left and right, with a hashmap tracking the last-seen index of each character. Move right forward, and whenever you hit a character that's already in the map AND its last-seen index is greater than or equal to left, jump left to last_seen + 1. Track the max window length as you go. The trap Meta watches for: candidates who reset the entire map when they hit a duplicate (O(n*k) instead of O(n)). Another trap: forgetting that the duplicate's recorded index might be before left already — in which case you don't move left at all. Time O(n), space O(min(n, alphabet_size)). Follow-up: extend to 'longest substring with at most K distinct characters' — same pattern, different shrink condition.
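A sketch of the sliding window with the last-seen map, including the guard against the stale-index trap mentioned above:

```python
def length_of_longest_substring(s):
    last_seen = {}          # char -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        # Only jump left forward; a recorded index before left is stale.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```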
Clone a graph (deep copy of an undirected graph with cycles).
Meta uses this to test BFS/DFS plus hashmap discipline. The trick is the cycle: if you naively recurse, you loop forever. Use a hashmap from original node to cloned node. When you visit a node, check the map first — if already cloned, return the existing clone. Otherwise create a new node, put it in the map immediately (before recursing on neighbors), then iterate the neighbors and recursively clone each, appending to the new node's neighbor list. Both DFS (recursion or stack) and BFS (queue) work; Meta interviewers often ask you to do both or compare them. Common bug: putting the node in the map AFTER recursing instead of before — that re-introduces the infinite loop. Time and space O(V + E).
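A DFS sketch of the clone, with the map insertion placed before the neighbor recursion to break cycles (the Node class here is an illustrative stand-in for whatever the interviewer defines):

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.neighbors = []

def clone_graph(node, seen=None):
    # seen maps original node -> its clone.
    if node is None:
        return None
    if seen is None:
        seen = {}
    if node in seen:
        return seen[node]
    copy = Node(node.val)
    seen[node] = copy            # register BEFORE recursing: breaks cycles
    for nb in node.neighbors:
        copy.neighbors.append(clone_graph(nb, seen))
    return copy
```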
Merge K sorted lists.
Two correct answers, and Meta will push you between them. The min-heap approach: push the head of every list into a heap, pop the smallest, advance that list's pointer, push the new head. Time O(N log K) where N is total nodes, K is list count. The divide-and-conquer approach: pair lists up, merge each pair (using the standard merge-two-sorted-lists), repeat until one list remains. Same complexity but no heap, better cache behavior, and arguably cleaner code. Meta's favorite follow-up: 'what if the lists are streams of unknown length and you can't fit them in memory?' — now the heap approach wins because divide-and-conquer requires materializing pairs. Be ready to discuss the constant-factor differences and when each wins.
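A sketch of the min-heap variant in Python; the index in each heap tuple is a tie-breaker so heapq never has to compare ListNode objects (the ListNode class is assumed):

```python
import heapq

class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_k_lists(lists):
    # Seed the heap with each list's head: (value, tie-breaker, node).
    heap = [(node.val, i, node) for i, node in enumerate(lists) if node]
    heapq.heapify(heap)
    dummy = tail = ListNode(0)
    while heap:
        _, i, node = heapq.heappop(heap)
        tail.next = node
        tail = node
        if node.next:
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next
```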
Find the K-th largest element in an unsorted array.
Meta wants you to know three approaches and pick: sort and index (O(n log n), trivial, not impressive), min-heap of size K (O(n log K), good general answer), or quickselect (O(n) average, O(n^2) worst, what Meta wants you to mention). Quickselect is the headline answer here: partition like quicksort, but only recurse into the side containing the K-th index. To avoid O(n^2), pick a random pivot or use median-of-medians. Implementation gotcha: people frequently mix up 'k-th largest' (index n-k in zero-indexed sorted order) versus 'k-th smallest.' Verify with the interviewer. Meta's follow-up: do it with O(1) extra space — in-place quickselect satisfies that, heap doesn't.
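A quickselect sketch with a random pivot and Lomuto partitioning; the copy of the input is a convenience here (in an interview you'd likely partition in place to claim O(1) extra space):

```python
import random

def kth_largest(nums, k):
    nums = list(nums)               # work on a copy for this sketch
    target = len(nums) - k          # k-th largest lives at this sorted index
    lo, hi = 0, len(nums) - 1
    while True:
        # Random pivot guards against the O(n^2) adversarial case.
        p = random.randint(lo, hi)
        nums[p], nums[hi] = nums[hi], nums[p]
        store = lo
        for i in range(lo, hi):     # Lomuto partition around nums[hi]
            if nums[i] < nums[hi]:
                nums[i], nums[store] = nums[store], nums[i]
                store += 1
        nums[store], nums[hi] = nums[hi], nums[store]
        if store == target:
            return nums[store]
        elif store < target:        # recurse only into the side that matters
            lo = store + 1
        else:
            hi = store - 1
```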
Given a 2D grid of '1's (land) and '0's (water), count the number of islands.
Classic graph traversal on a grid. Iterate every cell. When you find a '1', launch a DFS or BFS that flips all connected '1's to '0' (or marks them in a visited set), then increment the island counter. Connectivity is 4-directional (up/down/left/right) by default; Meta interviewers often add 8-connectivity as a follow-up. Two common bugs: (1) forgetting bounds checks before recursing, leading to ArrayIndexOutOfBounds; (2) not marking the cell visited before recursing, causing re-entry and either a stack overflow or double-counting. Time O(m*n), space O(m*n) worst case for the recursion stack on a fully-connected grid. Follow-up Meta loves: 'what if the grid is too large to fit in memory?' — discuss streaming / chunking with Union-Find at chunk boundaries.
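A DFS flood-fill sketch; note the bounds check happens before any cell access, and the cell is flipped before the recursive calls, dodging both bugs called out above (this version mutates the grid, which is worth flagging to the interviewer):

```python
def num_islands(grid):
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        # Bounds check first, then mark visited BEFORE recursing.
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != '1':
            return
        grid[r][c] = '0'
        sink(r + 1, c); sink(r - 1, c); sink(r, c + 1); sink(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                sink(r, c)          # flood one whole island
                count += 1
    return count
```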
Given an array of integers and a target sum, find all unique triplets that sum to zero (3Sum).
Sort the array first — this is non-obvious and worth saying out loud. Then iterate i from 0 to n-3. For each i, run a two-pointer scan with left = i+1 and right = n-1, looking for nums[left] + nums[right] = -nums[i]. The crucial subtlety is deduplication: skip i if nums[i] == nums[i-1], and after finding a triplet, skip both left and right past their duplicates before continuing. Without those skips you'll return duplicate triplets and Meta will mark you down. Time O(n^2), space O(1) if you don't count the output. Meta follow-ups: 4Sum (one more loop), 3Sum closest (track min diff to target), and 3Sum with arbitrary target instead of zero (exact same algorithm, just change the comparison).
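A sketch of the sort-then-two-pointers approach with both deduplication skips in place:

```python
def three_sum(nums):
    nums = sorted(nums)
    out = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue                      # skip duplicate anchor values
        left, right = i + 1, len(nums) - 1
        while left < right:
            s = nums[i] + nums[left] + nums[right]
            if s < 0:
                left += 1
            elif s > 0:
                right -= 1
            else:
                out.append([nums[i], nums[left], nums[right]])
                # Step both pointers past their duplicates before moving on.
                while left < right and nums[left] == nums[left + 1]:
                    left += 1
                while left < right and nums[right] == nums[right - 1]:
                    right -= 1
                left += 1
                right -= 1
    return out
```

For the arbitrary-target follow-up, only the comparison against `-nums[i]` changes.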
Implement LRU Cache with O(1) get and put.
This is THE Meta system-y coding question — it shows up in nearly every onsite. The combo is a hashmap (key to node) plus a doubly-linked list (ordering by recency). On get: hashmap lookup, then move that node to the head of the list (most recent), return value. On put: if key exists, update and move to head; if not, create node, add to head, add to map; if size exceeds capacity, evict the tail node and remove from map. The doubly-linked list is non-negotiable — it's what makes the move-to-head and evict-tail operations O(1). Common bugs: forgetting to update the map on eviction, mishandling the head/tail sentinels, off-by-one on capacity. Meta's follow-up: extend to LFU (Least Frequently Used) — significantly harder, requires frequency buckets.
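A sketch of the hashmap-plus-doubly-linked-list combo with head/tail sentinels, which dodge the edge-case bugs listed above (class and method names are illustrative; LeetCode's version uses get/put with this signature):

```python
class LRUCache:
    class _Node:
        __slots__ = ("key", "val", "prev", "next")
        def __init__(self, key=None, val=None):
            self.key, self.val = key, val
            self.prev = self.next = None

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}                                # key -> node
        self.head, self.tail = self._Node(), self._Node()   # sentinels
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):                     # head side = most recent
        node.next, node.prev = self.head.next, self.head
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._unlink(node)
        self._push_front(node)
        return node.val

    def put(self, key, val):
        if key in self.map:
            node = self.map[key]
            node.val = val
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.capacity:
            lru = self.tail.prev
            self._unlink(lru)
            del self.map[lru.key]    # the eviction bug: don't forget the map
        node = self._Node(key, val)
        self.map[key] = node
        self._push_front(node)
```

Every operation is a constant number of pointer swaps plus one hashmap access, which is exactly why the list must be doubly linked.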
Word break: given a string and a dictionary, can the string be segmented into space-separated dictionary words?
DP question Meta uses to test pattern recognition. Let dp[i] mean 'the prefix of length i is segmentable.' Base case dp[0] = true. For each i from 1 to n, check every j < i: if dp[j] is true and the substring from j to i is in the dictionary, set dp[i] = true and break. Convert the dictionary to a hash set for O(1) lookups. Time O(n^2 * L), where L bounds the word length and the L factor pays for slicing and hashing each candidate substring; capping the inner loop at the longest dictionary word length is an easy optimization. Meta's standard follow-up: word break II — return all possible segmentations, not just whether one exists. That requires storing the actual splits and gets exponential in the worst case ('aaaa...' with words 'a' and 'aa'); use memoization on suffixes to avoid redundant work, and tell the interviewer the worst case is unavoidable when output size itself is exponential.
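A sketch of the prefix DP exactly as laid out above:

```python
def word_break(s, words):
    word_set = set(words)              # O(1) membership checks
    dp = [False] * (len(s) + 1)
    dp[0] = True                       # empty prefix is trivially segmentable
    for i in range(1, len(s) + 1):
        for j in range(i):
            if dp[j] and s[j:i] in word_set:
                dp[i] = True
                break                  # one valid split suffices
    return dp[len(s)]
```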
Given a binary tree, find the lowest common ancestor (LCA) of two given nodes.
The elegant recursive solution: if the current node is null, or equals p, or equals q, return it. Otherwise recurse on left and right. If both recursive calls return non-null, current node is the LCA. If only one returns non-null, that's the LCA (or the path to it). Time O(n), space O(h) for recursion. Meta loves this because the recursion is short but the reasoning is subtle — you need to articulate why returning the node when it equals p OR q (without checking the other) is correct. The intuition: that returned value either IS the LCA (if the other node is in its subtree) or it bubbles up to a higher node where the other subtree's result joins it. Follow-ups: LCA in a BST (use ordering, no recursion needed), LCA when nodes have parent pointers (intersection-of-two-linked-lists trick), LCA of N nodes.
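The recursion above compresses to a few lines; a sketch, with the bubbling-up behavior the explanation hinges on called out in comments (TreeNode is an assumed class):

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lowest_common_ancestor(root, p, q):
    # Null, p, or q: return it as-is and let it bubble up.
    if root is None or root is p or root is q:
        return root
    left = lowest_common_ancestor(root.left, p, q)
    right = lowest_common_ancestor(root.right, p, q)
    if left and right:
        return root          # p and q split across subtrees: root is the LCA
    return left or right     # otherwise pass the one non-null result upward
```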