The patterns that show up in every coding interview. Eight questions covering the highest-yield techniques. If you can fluently recognize and apply these, you'll pass most L3–L5 coding rounds.
Q1. What is the two-pointer pattern, and when should you reach for it?
Two pointers is a technique where you maintain two indices into a sequence and move them based on a condition — usually one from each end, or both from the start at different speeds. It collapses what would be an O(n²) nested loop into a single O(n) pass. The classic case is finding a pair in a sorted array that sums to a target: place one pointer at index 0 and another at the last index, then move them inward depending on whether the current sum is below or above the target. The same shape solves container-with-most-water, three-sum (after sorting), and removing duplicates in place. The key precondition is some kind of monotonicity — usually a sorted array — that lets you safely rule out candidates with every pointer move.
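For concreteness, here is a minimal Python sketch of the sorted pair-sum case described above; the function name and sample values are illustrative, not from the original.

```python
def pair_with_sum(nums, target):
    """Indices of two values in a sorted list that sum to target, or None."""
    left, right = 0, len(nums) - 1
    while left < right:
        current = nums[left] + nums[right]
        if current == target:
            return left, right
        if current < target:
            left += 1    # sum too small: only a larger left value can help
        else:
            right -= 1   # sum too large: only a smaller right value can help
    return None

# pair_with_sum([1, 3, 4, 6, 10], 10) -> (2, 3), since 4 + 6 == 10
```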
Q2. How does the sliding-window pattern work, and what problems does it solve?
A sliding window maintains a contiguous range [left, right] over an array or string, expanding the right edge to include new elements and contracting the left edge when a constraint is violated. It is the right tool whenever the question asks for the longest, shortest, or count of contiguous subarrays satisfying some property. The canonical example is the longest substring without repeating characters: extend right, store each character's last index in a hash map, and when you see a repeat, jump left to one past that index. Variable-size windows handle constraints like at-most-K distinct characters. Fixed-size windows handle things like maximum sum of any K consecutive elements. The window only moves forward, giving you O(n) overall.
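A sketch of that canonical example, assuming the input is a plain Python string; the names are illustrative.

```python
def longest_unique_substring(s):
    """Length of the longest substring of s with no repeated characters."""
    last_seen = {}   # character -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        # Repeat inside the current window: jump left to one past its last index.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best

# longest_unique_substring("abcabcbb") -> 3  ("abc")
```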
Q3. When do you use BFS versus DFS on a graph or tree?
BFS explores level by level using a queue and is the default whenever you need the shortest path in an unweighted graph or the minimum number of steps to reach a state — word ladder, shortest bridge, rotting oranges, knight on a chessboard. DFS uses a stack (or recursion) and dives deep before backtracking. Reach for DFS when you need to enumerate paths, detect cycles, do topological sort, or compute connected components. On trees, DFS gives you elegant recursive solutions for subtree aggregates (max path sum, lowest common ancestor) while BFS is natural for level-order printing or right-side view. A good rule: if the question mentions shortest, fewest, or minimum steps, start with BFS; otherwise DFS is usually simpler.
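A minimal BFS sketch for the minimum-steps case. It assumes the graph is a dict mapping each node to a list of neighbors, which is one common representation rather than anything prescribed above.

```python
from collections import deque

def shortest_steps(graph, start, goal):
    """Fewest edges from start to goal in an unweighted graph; -1 if unreachable."""
    queue = deque([(start, 0)])
    visited = {start}
    while queue:
        node, steps = queue.popleft()
        if node == goal:
            return steps
        for neighbor in graph.get(node, []):
            if neighbor not in visited:   # mark on enqueue to avoid duplicates
                visited.add(neighbor)
                queue.append((neighbor, steps + 1))
    return -1
```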
Q4. How do you recognize a dynamic-programming problem and structure a solution?
DP applies when a problem has overlapping subproblems and optimal substructure — the same smaller question is asked many times, and the answer for size n can be built from answers for smaller sizes. The tell is wording like maximum, minimum, count the number of ways, or can you reach. Structure the solution in three steps: define the state precisely (e.g., dp[i] = longest increasing subsequence ending at index i), write the recurrence (dp[i] = 1 + max(dp[j]) for j < i where nums[j] < nums[i]), then choose top-down memoization or bottom-up tabulation. Classic warmups: climbing stairs, house robber, coin change, longest common subsequence, edit distance. Most interview DP problems use a 1D or 2D state with a small constant number of transitions.
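As a worked instance of those three steps, here is the longest-increasing-subsequence recurrence from above written as a bottom-up O(n²) table:

```python
def longest_increasing_subsequence(nums):
    """Length of the longest strictly increasing subsequence."""
    if not nums:
        return 0
    # State: dp[i] = length of the longest increasing subsequence ending at i.
    dp = [1] * len(nums)
    for i in range(1, len(nums)):
        for j in range(i):
            if nums[j] < nums[i]:
                dp[i] = max(dp[i], dp[j] + 1)   # extend the best chain ending at j
    return max(dp)

# longest_increasing_subsequence([10, 9, 2, 5, 3, 7, 101, 18]) -> 4  (2, 3, 7, 18)
```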
Q5. How should you think about recursion in an interview, and when does it become problematic?
Recursion is the right model whenever a problem decomposes naturally into smaller versions of itself — trees, divide-and-conquer, backtracking, generating permutations or subsets. Define the base case, define how each call builds its result from its subcalls, and trust the recursion to handle the rest. The trap is implicit stack depth: a recursive solution on a balanced tree of a million nodes is fine (depth ~20), but a recursive walk over a linked list of a million nodes will overflow. In Python the default limit is around 1000 frames. If depth is a concern, convert to iteration with an explicit stack, or rewrite tail-recursive calls as loops. Watch out for redundant recomputation — that is usually the signal to memoize and turn it into DP.
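A sketch of the explicit-stack conversion, using an illustrative Node class that is not part of the original text:

```python
class Node:
    def __init__(self, val, children=None):
        self.val = val
        self.children = children or []

def subtree_sum(root):
    """Sum of all values without recursion: depth is limited by heap memory,
    not by the interpreter's call-stack limit."""
    total = 0
    stack = [root]
    while stack:
        node = stack.pop()
        if node is None:
            continue
        total += node.val
        stack.extend(node.children)   # children are visited later, in DFS order
    return total
```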
Q6. What is binary search beyond sorted arrays, and how does it generalize?
Binary search is not just for finding an element in a sorted array — it works on any monotonic predicate over a range. The pattern is: define a function check(x) that returns true for all x at or above some threshold, and false below it (or vice versa), then binary search the range to find the boundary. This shape solves problems like minimum capacity to ship packages in D days, smallest divisor given a sum constraint, or median of two sorted arrays. The hardest part is recognizing the monotonicity. Once you see it, the implementation is mechanical: maintain low and high, compute mid, call check(mid), shrink the half that cannot contain the answer. Watch for off-by-one errors at the boundary — preferring [low, high) half-open ranges helps.
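A sketch of binary searching the answer for the ship-packages example; the helper name and the closed [low, high] range that converges on the boundary are implementation choices, not the only valid ones.

```python
def min_ship_capacity(weights, days):
    """Smallest capacity that ships all weights, in order, within `days` days."""
    def can_ship(capacity):
        used_days, load = 1, 0
        for w in weights:
            if load + w > capacity:   # current day is full, start a new one
                used_days += 1
                load = 0
            load += w
        return used_days <= days      # monotonic: true for every capacity >= the answer

    low, high = max(weights), sum(weights)
    while low < high:
        mid = (low + high) // 2
        if can_ship(mid):
            high = mid        # mid works, so the answer is mid or smaller
        else:
            low = mid + 1     # mid fails, so the answer must be larger
    return low

# min_ship_capacity([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 5) -> 15
```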
Q7. How does backtracking differ from brute force, and where does it shine?
Backtracking is brute force with pruning. You build a candidate solution incrementally, and as soon as the partial candidate cannot lead to a valid full solution, you abandon it and rewind. The framework is: choose, recurse, unchoose. It is the right tool for combinatorial search — N-queens, sudoku, generating all valid parentheses, word search in a grid, partition into K equal-sum subsets. The complexity is exponential in the worst case, but pruning often makes it tractable on real inputs. Two implementation tips: mutate a single shared state and undo on the way back rather than copying lists, and order your choices to put the most constrained ones first so failure happens early. That ordering can turn a time-limit-exceeded verdict into a sub-second solution.
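The choose/recurse/unchoose framework in miniature, on the valid-parentheses example, mutating one shared list and undoing on the way back:

```python
def valid_parentheses(n):
    """All strings of n balanced pairs of parentheses."""
    results, current = [], []

    def backtrack(open_used, close_used):
        if len(current) == 2 * n:
            results.append("".join(current))
            return
        if open_used < n:                # choose '('
            current.append("(")
            backtrack(open_used + 1, close_used)
            current.pop()                # unchoose
        if close_used < open_used:       # prune: ')' only if the prefix stays valid
            current.append(")")
            backtrack(open_used, close_used + 1)
            current.pop()                # unchoose

    backtrack(0, 0)
    return results

# valid_parentheses(2) -> ['(())', '()()']
```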
Q8. What's the role of heaps and priority queues in coding interviews?
A heap is the right data structure whenever the question asks for the K largest, K smallest, K closest, or anything where you repeatedly need the current min or max. Top-K problems are the canonical case: maintain a min-heap of size K; for each new element, push, then pop if the size exceeds K. The result is O(n log K) instead of an O(n log n) full sort. Heaps also power Dijkstra's shortest path, the merge-K-sorted-lists pattern, and the running-median problem (one max-heap for the lower half, one min-heap for the upper half). Know that Python's heapq is a min-heap by default — to simulate a max-heap, push negated values. Custom ordering is handled by pushing tuples whose first element is the priority.
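A minimal top-K sketch with heapq; for a quick check, heapq.nlargest does the same job in one call.

```python
import heapq

def k_largest(nums, k):
    """The k largest values of nums, in O(n log k) time and O(k) space."""
    heap = []                      # min-heap holding the k largest seen so far
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)    # evict the smallest of the k + 1 candidates
    return sorted(heap, reverse=True)

# k_largest([5, 1, 9, 3, 7, 2], 3) -> [9, 7, 5]
```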