
Meta Interview Questions — Back-to-Back Coding, System Design, and the E5/E6 Bar

Meta’s interview runs at the fastest pace of any FAANG loop. Two coding rounds run back-to-back as a 90-minute double-header — typically 4 problems, sometimes 5 — and that tempo alone eliminates candidates who could solve the same problems comfortably at Google or Microsoft. Layer in system design, a behavioral round mapped to Meta’s six cultural values, and a calibrated leveling discussion that decides E5 versus E6, and the loop tests pattern-recognition speed more than raw algorithm depth.

22 real questions below — coding, system design, behavioral, and leveling — written specifically for Meta’s format. Not generic FAANG advice. The follow-ups, the calibration tells, the things that down-level candidates from E6 to E5 — all of it.

About Meta’s pace: If you finish the first coding problem with 30 minutes left in the 45-minute slot, that’s normal — you’ll get a second problem. Train for two clean problems per round, not one perfect one.

How Meta interviews are structured

Most full-time SWE loops at Meta consist of 5 to 6 rounds, scheduled either as a single virtual onsite day or split across two days. The shape:

  • Two coding rounds (45 minutes each, often back-to-back). Each round expects 2 problems — one warm-up at medium difficulty, one harder follow-up. If you only finish one, that’s typically a soft no on that round. Whiteboard-style execution: write the code, walk through complexity, test with edge cases. No autocomplete.
  • System design (60 minutes, E5+ only). Open-ended product-system design. The bar at E5 is “design a working system at Meta scale, articulate trade-offs, identify bottlenecks.” The bar at E6 adds “scope the problem before designing” and “own the trade-off decisions, not just enumerate them.”
  • Behavioral / Jedi round (45 minutes). Run by an experienced engineer or manager. Maps your stories to Meta’s six cultural values. Multiple stories per round; expect the interviewer to pick a thread and pull on it for 5-10 minutes before moving on.
  • Calibration and leveling. No single round decides level. After all rounds complete, the interview team meets, shares written feedback, and converges on a level recommendation. A “down-level” from E6 to E5 is common — and is still a passing outcome, just at a lower offer band.

Coding round questions

Meta runs two coding rounds back-to-back, often as a 90-minute block with 2 problems each. Pace matters as much as correctness. These are the patterns that come up most.

Validate a Binary Search Tree (Meta-favorite, asked in coding round 1).

Meta loves this because the naive solution — checking left < root < right at each node — is wrong, and they want to see if you catch it. A node deeper in the right subtree of the root must still be greater than the root, not just greater than its immediate parent. The clean approach is recursion with min/max bounds passed down: at each node, ensure node.val is strictly greater than the lower bound and strictly less than the upper bound, then recurse left with upper bound updated to node.val, and right with lower bound updated to node.val. Use Long.MIN_VALUE and Long.MAX_VALUE (or null sentinels) to avoid edge cases with Integer.MIN_VALUE in the actual tree. Time O(n), space O(h) for recursion. Meta's follow-up is almost always: do it iteratively using inorder traversal — a valid BST has a strictly increasing inorder sequence.
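A minimal Python sketch of the bounds-passing recursion, using a bare-bones TreeNode class invented for illustration:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_valid_bst(root, low=float('-inf'), high=float('inf')):
    # An empty subtree is trivially valid.
    if root is None:
        return True
    # The node must sit strictly inside the bounds inherited from ALL
    # ancestors -- this is exactly what the naive parent-only check misses.
    if not (low < root.val < high):
        return False
    # Going left tightens the upper bound; going right tightens the lower bound.
    return (is_valid_bst(root.left, low, root.val) and
            is_valid_bst(root.right, root.val, high))
```

In Python, float('-inf')/float('inf') sentinels sidestep the Integer.MIN_VALUE edge case that the Java version has to dodge with Long bounds or nulls.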

Given a string, find the longest substring without repeating characters.

This is the canonical Meta sliding-window question. Two pointers, left and right, with a hashmap tracking the last-seen index of each character. Move right forward, and whenever you hit a character that's already in the map AND its last-seen index is greater than or equal to left, jump left to last_seen + 1. Track the max window length as you go. The trap Meta watches for: candidates who reset the entire map when they hit a duplicate (O(n*k) instead of O(n)). Another trap: forgetting that the duplicate's recorded index might be before left already — in which case you don't move left at all. Time O(n), space O(min(n, alphabet_size)). Follow-up: extend to 'longest substring with at most K distinct characters' — same pattern, different shrink condition.
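The window logic above, as a short sketch (the function name is mine). Note the `>= left` guard, which handles the stale-index trap the paragraph describes:

```python
def longest_unique_substring(s):
    last_seen = {}       # char -> index of its most recent occurrence
    left = best = 0
    for right, ch in enumerate(s):
        # Only jump left if the duplicate is inside the current window;
        # a recorded index before left is stale and must be ignored.
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```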

Clone a graph (deep copy of an undirected graph with cycles).

Meta uses this to test BFS/DFS plus hashmap discipline. The trick is the cycle: if you naively recurse, you loop forever. Use a hashmap from original node to cloned node. When you visit a node, check the map first — if already cloned, return the existing clone. Otherwise create a new node, put it in the map immediately (before recursing on neighbors), then iterate the neighbors and recursively clone each, appending to the new node's neighbor list. Both DFS (recursion or stack) and BFS (queue) work; Meta interviewers often ask you to do both or compare them. Common bug: putting the node in the map AFTER recursing instead of before — that re-introduces the infinite loop. Time and space O(V + E).
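A DFS sketch of the clone, assuming a minimal Node class with `val` and `neighbors`. The map registration happens before the recursive calls, which is the whole trick:

```python
class Node:
    def __init__(self, val):
        self.val, self.neighbors = val, []

def clone_graph(node, cloned=None):
    if node is None:
        return None
    if cloned is None:
        cloned = {}                  # original node -> its clone
    if node in cloned:
        return cloned[node]          # already cloned: reuse, which breaks the cycle
    copy = Node(node.val)
    cloned[node] = copy              # register BEFORE recursing on neighbors
    for nb in node.neighbors:
        copy.neighbors.append(clone_graph(nb, cloned))
    return copy
```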

Merge K sorted lists.

Two correct answers, and Meta will push you between them. The min-heap approach: push the head of every list into a heap, pop the smallest, advance that list's pointer, push the new head. Time O(N log K) where N is total nodes, K is list count. The divide-and-conquer approach: pair lists up, merge each pair (using the standard merge-two-sorted-lists), repeat until one list remains. Same complexity but no heap, better cache behavior, and arguably cleaner code. Meta's favorite follow-up: 'what if the lists are streams of unknown length and you can't fit them in memory?' — now the heap approach wins because divide-and-conquer requires materializing pairs. Be ready to discuss the constant-factor differences and when each wins.
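A sketch of the heap approach with a minimal ListNode class (names are mine). The tuple's middle element is a tie-breaker so equal values never force Python to compare ListNode objects:

```python
import heapq

class ListNode:
    def __init__(self, val, nxt=None):
        self.val, self.next = val, nxt

def merge_k_lists(lists):
    # Seed the heap with every non-empty head: O(K) entries at any time.
    heap = [(head.val, i, head) for i, head in enumerate(lists) if head]
    heapq.heapify(heap)
    dummy = tail = ListNode(0)
    while heap:
        _, i, node = heapq.heappop(heap)   # smallest current head
        tail.next = node
        tail = node
        if node.next:                      # advance that list's pointer
            heapq.heappush(heap, (node.next.val, i, node.next))
    return dummy.next

def to_list(node):
    # Small helper for inspecting results.
    out = []
    while node:
        out.append(node.val)
        node = node.next
    return out
```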

Find the K-th largest element in an unsorted array.

Meta wants you to know three approaches and pick: sort and index (O(n log n), trivial, not impressive), min-heap of size K (O(n log K), good general answer), or quickselect (O(n) average, O(n^2) worst, what Meta wants you to mention). Quickselect is the answer this question is really fishing for: partition like quicksort, but only recurse into the side containing the K-th index. To avoid O(n^2), pick a random pivot or use median-of-medians. Implementation gotcha: people frequently mix up 'k-th largest' (index n-k in zero-indexed sorted order) versus 'k-th smallest.' Verify with the interviewer. Meta's follow-up: do it with O(1) extra space — in-place quickselect satisfies that, a heap doesn't.
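A quickselect sketch with a random pivot and a three-way partition (which also handles duplicate values cleanly); the function name and structure are mine:

```python
import random

def kth_largest(nums, k):
    # k-th largest == element at index len(nums)-k in sorted order.
    nums = list(nums)                 # copy; drop this for true O(1) extra space
    target = len(nums) - k
    lo, hi = 0, len(nums) - 1
    while True:
        pivot = nums[random.randint(lo, hi)]
        # Three-way partition of nums[lo..hi]: [< pivot | == pivot | > pivot].
        lt, i, gt = lo, lo, hi
        while i <= gt:
            if nums[i] < pivot:
                nums[lt], nums[i] = nums[i], nums[lt]
                lt += 1; i += 1
            elif nums[i] > pivot:
                nums[i], nums[gt] = nums[gt], nums[i]
                gt -= 1
            else:
                i += 1
        # Only recurse (iteratively) into the side containing the target index.
        if target < lt:
            hi = lt - 1
        elif target > gt:
            lo = gt + 1
        else:
            return pivot              # target falls in the == pivot band
```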

Given a 2D grid of '1's (land) and '0's (water), count the number of islands.

Classic graph traversal on a grid. Iterate every cell. When you find a '1', launch a DFS or BFS that flips all connected '1's to '0' (or marks them in a visited set), then increment the island counter. The directionality is 4-connected (up/down/left/right) by default; Meta interviewers often add 8-connected as a follow-up. Two common bugs: (1) forgetting bounds checks before recursing, leading to ArrayIndexOutOfBounds; (2) not marking the cell visited before recursing, causing re-entry and either a stack overflow or double-counting. Time O(m*n), space O(m*n) worst case for the recursion stack on a fully-connected grid. Follow-up Meta loves: 'what if the grid is too large to fit in memory?' — discuss streaming / chunking with Union-Find at chunk boundaries.
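A DFS sketch of the sink-the-island approach; it mutates the grid in place, which stands in for a visited set:

```python
def num_islands(grid):
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        # Bounds check FIRST, then the water check -- skipping the bounds
        # check is the classic out-of-range bug.
        if not (0 <= r < rows and 0 <= c < cols) or grid[r][c] != '1':
            return
        grid[r][c] = '0'              # mark visited BEFORE recursing
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):   # 4-connected
            sink(r + dr, c + dc)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1            # new island found
                sink(r, c)            # erase it so it isn't counted again
    return count
```

For the 8-connected follow-up, the only change is adding the four diagonal offsets to the direction tuple.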

Given an array of integers and a target sum, find all unique triplets that sum to zero (3Sum).

Sort the array first — this is non-obvious and worth saying out loud. Then iterate i from 0 to n-3. For each i, run a two-pointer scan with left = i+1 and right = n-1, looking for nums[left] + nums[right] = -nums[i]. The crucial subtlety is deduplication: skip i if nums[i] == nums[i-1], and after finding a triplet, skip both left and right past their duplicates before continuing. Without those skips you'll return duplicate triplets and Meta will mark you down. Time O(n^2), space O(1) if you don't count the output. Meta follow-ups: 4Sum (one more loop), 3Sum closest (track min diff to target), and 3Sum with arbitrary target instead of zero (exact same algorithm, just change the comparison).
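A sketch of the sorted two-pointer scan; the two inner while loops are the deduplication skips the paragraph calls out:

```python
def three_sum(nums):
    nums = sorted(nums)               # sorting enables the two-pointer scan
    res, n = [], len(nums)
    for i in range(n - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue                  # skip duplicate anchor values
        left, right = i + 1, n - 1
        while left < right:
            s = nums[i] + nums[left] + nums[right]
            if s < 0:
                left += 1
            elif s > 0:
                right -= 1
            else:
                res.append([nums[i], nums[left], nums[right]])
                # Skip duplicates on BOTH pointers before continuing,
                # otherwise the same triplet is emitted again.
                while left < right and nums[left] == nums[left + 1]:
                    left += 1
                while left < right and nums[right] == nums[right - 1]:
                    right -= 1
                left += 1
                right -= 1
    return res
```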

Implement LRU Cache with O(1) get and put.

This is THE Meta system-y coding question — it shows up in nearly every onsite. The combo is a hashmap (key to node) plus a doubly-linked list (ordering by recency). On get: hashmap lookup, then move that node to the head of the list (most recent), return value. On put: if key exists, update and move to head; if not, create node, add to head, add to map; if size exceeds capacity, evict the tail node and remove from map. The doubly-linked list is non-negotiable — it's what makes the move-to-head and evict-tail operations O(1). Common bugs: forgetting to update the map on eviction, mishandling the head/tail sentinels, off-by-one on capacity. Meta's follow-up: extend to LFU (Least Frequently Used) — significantly harder, requires frequency buckets.
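A sketch with an explicit doubly-linked list and head/tail sentinels (in a real Python interview you could also reach for OrderedDict, but interviewers usually want the list built by hand):

```python
class _Node:
    def __init__(self, key=0, val=0):
        self.key, self.val = key, val
        self.prev = self.next = None

class LRUCache:
    def __init__(self, capacity):
        self.cap = capacity
        self.map = {}                              # key -> node
        # Sentinels avoid null checks on every splice.
        self.head, self.tail = _Node(), _Node()
        self.head.next, self.tail.prev = self.tail, self.head

    def _unlink(self, node):
        node.prev.next, node.next.prev = node.next, node.prev

    def _push_front(self, node):
        node.prev, node.next = self.head, self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return -1
        node = self.map[key]
        self._unlink(node)
        self._push_front(node)                     # mark most-recently used
        return node.val

    def put(self, key, val):
        if key in self.map:
            node = self.map[key]
            node.val = val
            self._unlink(node)
            self._push_front(node)
            return
        if len(self.map) >= self.cap:
            lru = self.tail.prev                   # least-recent lives at the tail
            self._unlink(lru)
            del self.map[lru.key]                  # don't forget the map on eviction
        node = _Node(key, val)
        self.map[key] = node
        self._push_front(node)
```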

Word break: given a string and a dictionary, can the string be segmented into space-separated dictionary words?

DP question Meta uses to test pattern recognition. Let dp[i] mean 'the prefix of length i is segmentable.' Base case dp[0] = true. For each i from 1 to n, check every j < i: if dp[j] is true and the substring from j to i is in the dictionary, set dp[i] = true and break. Convert the dictionary to a hash set for O(1) lookups. Time O(n^2 * L) where L is average word length (the substring hash). Meta's standard follow-up: word break II — return all possible segmentations, not just whether one exists. That requires storing the actual splits and gets exponential in the worst case ('aaaa...' with words 'a' and 'aa'); use memoization on suffixes to avoid redundant work, and tell the interviewer the worst case is unavoidable when output size itself is exponential.
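The DP above is only a few lines in Python; a minimal sketch:

```python
def word_break(s, words):
    dictionary = set(words)           # O(1) membership checks
    n = len(s)
    dp = [False] * (n + 1)
    dp[0] = True                      # empty prefix is trivially segmentable
    for i in range(1, n + 1):
        for j in range(i):
            # Prefix of length j segmentable, and s[j:i] is a dictionary word.
            if dp[j] and s[j:i] in dictionary:
                dp[i] = True
                break                 # one valid split is enough
    return dp[n]
```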

Given a binary tree, find the lowest common ancestor (LCA) of two given nodes.

The elegant recursive solution: if the current node is null, or equals p, or equals q, return it. Otherwise recurse on left and right. If both recursive calls return non-null, current node is the LCA. If only one returns non-null, that's the LCA (or the path to it). Time O(n), space O(h) for recursion. Meta loves this because the recursion is short but the reasoning is subtle — you need to articulate why returning the node when it equals p OR q (without checking the other) is correct. The intuition: that returned value either IS the LCA (if the other node is in its subtree) or it bubbles up to a higher node where the other subtree's result joins it. Follow-ups: LCA in a BST (use ordering, no recursion needed), LCA when nodes have parent pointers (intersection-of-two-linked-lists trick), LCA of N nodes.
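The recursion in full, with a throwaway TreeNode class for illustration; it assumes both p and q are present in the tree, which is worth stating to the interviewer:

```python
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def lowest_common_ancestor(root, p, q):
    # Base cases: fell off the tree, or found one of the targets.
    # Returning early on p or q is safe: either the other node is below it
    # (so it IS the LCA) or the result bubbles up to the split point.
    if root is None or root is p or root is q:
        return root
    left = lowest_common_ancestor(root.left, p, q)
    right = lowest_common_ancestor(root.right, p, q)
    if left and right:       # p and q split across subtrees: root is the LCA
        return root
    return left or right     # otherwise propagate whichever side found something
```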

System design questions

60-minute round, one large system. Meta's product portfolio means most prompts map to something they actually run. Be ready to discuss real Meta-scale numbers.

Design Facebook News Feed.

Meta will absolutely ask this if you interview for any product-side role. The core tension is fanout-on-write versus fanout-on-read. Fanout-on-write: when a user posts, push the post into every follower's feed cache. Reads are O(1) — just look up the cached feed. But writes are O(followers), which breaks for celebrities with 10M+ followers (the 'Justin Bieber problem'). Fanout-on-read: don't precompute; on feed-read, fetch posts from all followed users and merge. Reads are expensive but writes are cheap. The real Meta answer is hybrid: fanout-on-write for normal users, fanout-on-read for high-follower users, merge the two streams at read time. Components: posts service, follow graph, feed-generation workers (Kafka-driven), feed cache (Redis), ranking service (ML model that re-ranks the candidate set by predicted engagement). Storage: posts in a wide-column store like Cassandra or HBase, follow edges in a graph database or sharded MySQL. Discuss eventual consistency (a new post might take seconds to appear in followers' feeds), pagination via cursors not offsets, and how to A/B test ranking models. Meta interviewers care that you mention ranking — feed isn't chronological, it's ML-ranked, and that drives most of the system's complexity.
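A toy illustration of the hybrid read path described above, with every store faked as an in-memory dict and the celebrity threshold invented for the example — this is a sketch of the merge logic, not Meta's implementation:

```python
import heapq

CELEB_THRESHOLD = 10_000   # hypothetical follower cutoff for fanout-on-read

def read_feed(user_id, feed_cache, follow_graph, posts_by_author, limit=20):
    """Merge the precomputed (fanout-on-write) feed with live pulls from
    high-follower accounts, newest first by timestamp."""
    # Precomputed part: already stored newest-first per user.
    cached = feed_cache.get(user_id, [])
    # Live part: pull posts only from followed accounts above the threshold.
    celeb_posts = []
    for author in follow_graph.get(user_id, []):
        entry = posts_by_author.get(author)
        if entry and entry['followers'] >= CELEB_THRESHOLD:
            celeb_posts.extend(entry['posts'])
    celeb_posts.sort(key=lambda p: p['ts'], reverse=True)
    # Two-way merge at read time, newest (largest ts) first.
    merged = heapq.merge(cached, celeb_posts,
                         key=lambda p: p['ts'], reverse=True)
    return list(merged)[:limit]
```

A real ranking service would then re-order this candidate set by predicted engagement rather than returning it chronologically.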

Design Instagram Stories.

Different tradeoffs from feed because of the 24-hour expiry and the strong locality of access (most views happen in the first few hours). Components: upload service (handles photo/video, transcodes to multiple resolutions), CDN (critical — stories are video-heavy and bandwidth-dominated), metadata store (story ID, owner, expiry, viewer list), story-feed service that returns the ordered list of unviewed-stories per user. The expiry is interesting: don't actually delete on hour 24 — soft-delete in metadata and let a background job clean up media after some grace period. Viewer tracking is high-write: when someone watches a story, you write a (story_id, viewer_id, timestamp) row. With a billion users this is heavy; batch writes through Kafka and aggregate. For the ordering of which stories to show: most-recent-first within unviewed, then collapse viewed stories. Pre-fetching is critical for UX — when a user opens a story, pre-fetch the next 2-3 in the background so playback feels instant. Meta will push on: how do you serve videos at multiple resolutions without storing 5 copies of every video (answer: ABR streaming with HLS or DASH), and how do you handle the global CDN edge invalidation when a user deletes a story early.

Design WhatsApp messaging.

End-to-end encryption is the headline, but the real system-design substance is delivery semantics and presence at scale. Architecture: persistent connections from clients to edge servers (WebSocket or XMPP-like protocol). Edge servers route messages through a backend message-bus to the recipient's edge server. If the recipient is offline, the message goes into a per-user queue (cassandra or similar) and is delivered on reconnect. For E2E encryption, the server only sees ciphertext — the Signal Protocol handles the key exchange, with prekeys stored server-side so first contact works even when the recipient is offline. Discuss the double-ratchet for forward secrecy. The two-checkmark / blue-checkmark UX is implemented via delivery receipts and read receipts, both small messages flowing back through the same channel. For groups, do you fan-out at sender or at server? WhatsApp does sender-side fanout for E2E groups (each member gets a separately-encrypted copy), which limits group size — and that's a real design constraint Meta wants you to identify. Presence (online/offline/last-seen) is a heavy write workload; aggregate, throttle, and use eventually-consistent stores. Voice/video calling is a separate WebRTC subsystem with TURN servers for NAT traversal — mention but don't dive deep unless asked.

Design Facebook Live streaming.

Live streaming is the hardest of the four because latency budgets are tight. Two phases: ingest and distribution. Ingest: streamer's client uploads a live RTMP or WebRTC stream to the nearest ingest server. The ingest server transcodes to multiple bitrates (1080p, 720p, 480p, 240p) for ABR, segments into 2-6 second chunks, and pushes to origin storage. Distribution: chunks are pushed to a CDN; viewers pull HLS or DASH segments from the nearest edge. End-to-end latency is the killer — naive HLS gives 20-30 second lag; reduced-segment HLS or LL-HLS / WebRTC for sub-3-second is what you want for interactive streams (live Q&As). Discuss the trade between latency and buffering robustness — shorter segments mean more frequent stalls on flaky networks. The chat overlay is a separate pub-sub system (Kafka or similar) with per-stream topics; rate-limit and aggregate when concurrent commenters exceed thousands. Recording: the same chunks that go to viewers are also persisted to long-term storage for VOD playback after the stream ends. Scale concern: a stream with 10M concurrent viewers requires the CDN to fan out from a handful of origins; this is where Meta's edge infrastructure (Facebook Edge / CDN points-of-presence) earns its keep. Mention regional sharding — viewers in Asia hit Asian edges, not the origin in California.

Behavioral questions

45-minute round, mapped to Meta's cultural values: Move Fast, Be Bold, Focus on Long-Term Impact, Build Awesome Things, Live in the Future, Meta Metamates Me. Stories should be specific and recent.

Tell me about a time you moved fast and broke something. (Direct reference to Meta's old motto 'Move fast and break things.')

Meta wants to see that you can ship aggressively AND own the consequences. The bad version of this answer is 'I shipped fast, nothing broke, everything was great' — that's not the question. The good version: pick a real moment where speed cost something. Structure: (1) the pressure to move fast (clear deadline, competing project, customer urgency); (2) the specific shortcut you took (skipped a manual QA pass, relied on partial test coverage, deployed without a feature flag); (3) what broke (latency spike, wrong analytics numbers, a small subset of users seeing the wrong thing); (4) how you detected it and how fast (monitoring caught it in 30 minutes, customer reported it before monitoring did — be honest); (5) what you fixed AND what you systematized so the same shortcut wouldn't break the same thing twice. Meta's value isn't 'break things' — the updated version is 'Move Fast With Stable Infra.' The answer should reflect that evolution: speed is the goal, but the way you make speed sustainable is the story.

Describe a time you owned a project end-to-end.

Meta's E5+ bar requires demonstrated ownership beyond just writing code. Strong answers cover the full arc: how the project was identified (you spotted the need, or you inherited a vague mandate and scoped it), how you wrote the proposal or design doc, how you got buy-in from cross-functional partners (PM, design, partner teams, security), how you broke down the work and either did it solo or led a small team, how you handled the inevitable scope changes, how you measured success (concrete metric — latency, engagement, revenue, infra cost), and how you communicated outcomes to leadership. The trap candidates fall into: claiming end-to-end ownership but only describing the implementation phase. If your story starts with 'I was assigned the task to build X' and ends at 'I shipped it,' that's not E5 ownership, that's project execution. Meta wants the messy parts: the moment when a key dependency dropped out, or a stakeholder pushed back, or the original metric didn't move and you had to pivot.

Tell me about a time you disagreed with your manager or skip-level.

This is a Meta favorite because their culture explicitly values 'Be Bold' and 'Be Direct.' The wrong move is to either (a) describe a situation where you disagreed and immediately deferred, or (b) describe a situation where you fought your manager publicly and won. Both are red flags. The right structure: state the disagreement specifically and technically (e.g., 'my manager wanted to ship feature X with a flag-only rollout to 1%; I argued for 50% based on the actual risk profile'), describe how you raised it (1:1, with data, with a written argument), describe the resolution — and critically, describe what you did if you didn't win. The mature answer often ends 'we shipped it their way, here's what happened, here's what I learned about my own confidence calibration.' Meta values disagreement that's evidence-driven and contained to the right forum, plus the maturity to commit fully once a decision is made (the 'disagree and commit' pattern Amazon also uses).

Tell me about a time you had to push back on a deadline.

Meta moves fast and that means deadlines are aggressive — and sometimes unrealistic. They want to see that you can push back without being the engineer who always says 'this will take 6 months.' Structure your answer as a negotiation, not a refusal: explain the original ask, explain why the original timeline wasn't achievable (specifically — staffing, dependencies, risk, scope), present the alternatives you offered (reduced scope, phased rollout, different staffing, accepting more risk in some component), and explain the outcome and whether the renegotiated timeline held. The signal Meta is looking for: do you treat deadlines as collaborative constraints or as immutable orders? Strong candidates also describe the relationships they preserved — pushing back without burning bridges with the PM or partner team is the actual skill. End with what you'd do differently next time — often the answer is 'flag the risk earlier; I waited two weeks longer than I should have.'

Tell me about a time you gave someone difficult feedback.

At E5+, Meta expects you to have done this — peer feedback at performance review time is mandatory and often substantive. Pick a real instance: a peer or report whose code quality was below the team bar, or who was missing meetings, or whose technical direction you disagreed with. Describe the lead-up (you observed the pattern over weeks, not minutes), the way you delivered it (1:1, in person or video, framed around impact rather than personality), how the recipient reacted, and what changed afterward. The signal Meta wants: directness with empathy. The trap: telling a story where you delivered feedback that was technically correct but the person didn't change, and treating that as success. The better arc shows iteration — your first delivery didn't land, you re-framed, eventually they shifted. If the person didn't change, be honest about that and what you escalated.

E5 / E6 / E7 leveling questions

Meta's level decision rides primarily on system design and behavioral. These are the questions that actually decide E5 versus E6 versus down-level.

What's the difference between E5 (Senior) and E6 (Staff) at Meta, and how does the interview reflect it?

E5 is expected to own a full project on a single team with limited supervision. E6 is expected to drive ambiguous, multi-quarter initiatives that span multiple teams or products, often without formal authority over the people executing. The interview reflects this in three concrete ways. First, the system-design round at E6 is more open — instead of 'design Instagram stories,' it might be 'we need to reduce mobile data usage globally by 20%, where would you start' — the candidate has to scope the problem before designing anything. Second, the behavioral round at E6 weighs cross-functional influence heavily: how did you align three teams without being their manager? Third, the coding rounds are the same problems — Meta does not raise the coding bar from E5 to E6 — but interviewers expect cleaner first drafts and more proactive complexity analysis. The level decision is mostly made on system design and behavioral signal. Many candidates 'down-level' from E6 to E5 not because their coding was weak but because their cross-team examples didn't clear the bar.

How does Meta evaluate 'leading without authority' for senior+ candidates?

This is the central E6/E7 question. Meta's matrix structure means senior engineers regularly drive work across teams they don't manage. The interview probes for it through behavioral questions like 'tell me about an initiative you drove that required getting buy-in from a team that didn't report to you,' and the bar answer covers four things: (1) how you identified the problem and made the case it was worth solving — usually with data, not opinion; (2) how you mapped the stakeholders and figured out who actually needed to say yes (often a small number of senior ICs and one or two managers, not the whole team); (3) how you negotiated trade-offs when the partner team had different priorities — what did you offer in exchange for their engagement, did you take on work they would otherwise own, did you escalate to leadership and if so, how; (4) how you measured success and reported back to the partner teams afterward (the maintenance of the relationship matters as much as the outcome). Candidates who describe themselves as the 'driver' but can't articulate the partner team's incentives usually fail this round.

What does Meta look for in the E5/E6 'scope and ambiguity' question?

Meta interviewers commonly ask: 'tell me about the most ambiguous problem you've worked on.' The signal is whether you can convert ambiguity into structured work without freezing or over-asking for clarification. Strong answers describe a problem that genuinely had no clear definition at the start (e.g., 'our team was told to reduce churn but no one had defined which segment of users or what counted as churn'), and walk through the structuring process: how you broke it into sub-problems, how you ran cheap experiments to validate hypotheses before investing real engineering, how you communicated the chosen scope upward so leadership could course-correct early, and how you re-scoped midway when the data didn't match the original hypothesis. The bar at E6 is significantly higher than E5 here: E5 candidates can describe ambiguity within a project, E6 candidates need to describe ambiguity at the project-selection level — i.e., 'I wasn't told what to work on, I figured out what to work on and got the team aligned.' Down-leveling from E6 to E5 happens disproportionately on this question.

Reading the questions is step one. The pace is step two.

Meta’s interview punishes hesitation more than any other FAANG loop. You can know the LRU Cache answer cold and still fail the round because you took 12 minutes on the warm-up. The candidates who land Meta offers practice the second problem of the double-header — the one that comes after you’re already tired. That’s the round that decides it.

For the live round itself, PhantomCode AI is the real-time copilot. It listens to the question, surfaces the optimal pattern, and stays invisible to the interviewer’s screen-share. Built specifically for Meta-style pace — sub-second pattern recognition on the second problem when you’ve already spent your best thinking on the first.

Try the interview copilot · Other company question banks