
Engineering Manager Interview Questions — People, Process, and Technical Decisions

Engineering manager rounds blend technical depth with management judgment, and that mix is what makes the loop hard to prepare for. The same candidate who could pass a staff IC loop on Monday can fail an EM loop on Friday — not because their coding got worse, but because the rubric is now scoring people decisions, scope ownership, and stakeholder calibration alongside the design board. This page collects the question types that come up most often across first-time EM, senior EM, and director-track loops, with structured answers in the voice interviewers actually reward.

How EM interviews are structured

Most companies run a five-to-six-round EM loop. The exact split varies, but the buckets are remarkably consistent across Google, Meta, Amazon, Stripe, and the next tier of companies down. Expect the following:

  • Technical / coding. Yes, you are still expected to code. The bar is usually one notch easier than a senior IC bar, but rust shows. Most loops include one live coding round in the manager track.
  • System design. Same bar as a senior IC at most companies, sometimes higher because EMs are expected to articulate trade-offs to non-engineers.
  • Hiring & team building. How you build a team, how you read signal in an interview loop, how you balance levels and trajectories.
  • Performance management. The hardest round for first-time managers. Interviewers look for direct experience with low performers, hard reviews, and exits.
  • Project planning. Roadmap, scope trade-offs, dependency management, deadline negotiation.
  • Executive communication. Skip-level or director-led round. How you write up a project, how you signal risk, how you ask for headcount.

The trap for staff ICs moving into this loop is over-indexing on the technical rounds. The differentiator at the EM level is almost never the design diagram — it's the people stories with specifics, and the ability to make a trade-off out loud.

On this page

  • People management (7)
  • Hiring & team building (4)
  • System design at scale (3)
  • Project management (3)
  • Strategic & executive (3)

People management Q&A

The core of every EM loop. Interviewers are watching for direct experience giving hard feedback, structured judgment under ambiguity, and the ability to separate behavior from circumstance.

How would you handle a low performer on your team?

Start by separating performance from circumstance. Before labeling someone a low performer, I confirm three things: clear expectations were set in writing, the engineer has the context and tools they need, and there is no personal or team-level disruption that explains the dip. Once I've ruled those out, I move to a structured conversation — not a surprise. I share specific examples (two or three, not a pattern lecture), describe the impact, and ask for their read on what's getting in the way. From there we agree on a 30-day plan with measurable checkpoints: scope of work, definition of done, and weekly 1:1 reviews. If the gap closes, we close the loop publicly so they regain trust. If it doesn't, I move into a formal performance improvement plan with HR, but I make sure the engineer has heard every concern from me long before the document arrives. The worst outcome is firing someone who never knew they were underperforming — it signals to the rest of the team that feedback isn't real until it's terminal.

Describe a difficult performance review you gave.

I had a senior engineer who was technically excellent but consistently dismissive in code review. Junior engineers had stopped submitting PRs to him because the feedback felt punitive. The difficulty was that his code shipped, his designs were sound, and on paper he was a top performer — but the team's velocity was being silently taxed. I went into the review with two specific incidents documented and the team's calibration data: review turnaround time on his PRs versus others, attrition signals from skip-levels. I framed it as a leverage problem rather than a character flaw — at his level, his impact is bounded by how others grow around him. He pushed back hard at first. I held the line, gave him 90 days to demonstrate improved review tone with concrete behaviors (asking questions before declaring fixes, acknowledging good design before flagging issues), and offered coaching support. He course-corrected, and within two cycles I was able to write a much stronger promotion case for him because his sphere of influence had genuinely grown.

How do you give feedback to a senior engineer who outranks you in tenure or technical depth?

I lead with curiosity, not authority. Senior engineers can smell positional feedback from a mile away — and they're usually right that the manager doesn't have the deepest technical context. So I anchor feedback in observable outcomes the engineer also cares about: a design that two on-call rotations had to patch, a doc the team can't navigate, a meeting where a decision wasn't made. I describe what I saw, what the second-order effect was, and ask what their read is. If they disagree, I treat that as legitimate input — sometimes I'm wrong, sometimes I'm missing context. If they agree but resist changing behavior, I get specific about what 'better' looks like at their level: not 'be nicer' but 'before disagreeing in a meeting, restate the proposal in your own words first.' The mistake new managers make with senior ICs is hedging — softening feedback so much that the engineer leaves the room thinking everything is fine. Senior engineers respect directness more than seniority.

Tell me about a time you had to fire someone.

I let go of a mid-level engineer about 14 months into my first manager role. He had been on a PIP for 60 days, missed two of three milestones, and the third was checked off only because his teammate had quietly rewritten his work. The decision wasn't hard by the time it arrived — what was hard was owning that I had let it go too long. I should have had the first hard conversation three months earlier. When I made the call, I did three things deliberately: I gave him the news first thing on a Monday so he had the full week to plan, not on a Friday afternoon. I had HR in the room but I delivered the message myself, including the specific reasons. And I gave him a generous severance window and a referral commitment for roles that suited his actual strengths, which were real — they just weren't the strengths the role demanded. The team knew within an hour. I told them what I was allowed to say, acknowledged that everyone had been carrying his load, and announced the backfill plan that day. Trust on the team went up, not down, because the silent tax was finally removed.

How do you measure team health?

Velocity and incident rate are lagging indicators — by the time they move, the team is already in trouble. I track four leading signals. First, 1:1 sentiment trend: I keep a private rubric per engineer scored each 1:1 (engaged / neutral / disengaged) and watch the slope, not the absolute. Second, voluntary collaboration: how often do engineers pair, review each other's PRs, or volunteer for ambiguous work? When that drops, fear is up. Third, on-call experience: I read every incident retro and look for blame language and weekend pages. Fourth, the gap between what people say in 1:1s and what they say in team meetings — when those diverge sharply, psychological safety is eroding. I supplement with a quarterly anonymous pulse, but I don't lead with it. The number tells you something is wrong; the conversations tell you what.
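
The sentiment-trend signal above can be sketched in a few lines. This is a minimal illustration, not a tool the text describes: the numeric mapping (+1/0/−1) and the least-squares slope are assumptions chosen to show why the slope flags trouble before the absolute scores do.

```python
# Sketch of the 1:1 sentiment-trend signal described above.
# The score mapping and slope method are illustrative assumptions.

SENTIMENT = {"engaged": 1, "neutral": 0, "disengaged": -1}

def sentiment_slope(history):
    """Least-squares slope of sentiment over successive 1:1s.

    `history` is a chronological list of labels like "engaged".
    A negative slope flags a downward trend even while most
    individual scores still look fine.
    """
    ys = [SENTIMENT[label] for label in history]
    n = len(ys)
    if n < 2:
        return 0.0
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# An engineer drifting from engaged to disengaged over five 1:1s:
trend = sentiment_slope(["engaged", "engaged", "neutral", "neutral", "disengaged"])
assert trend < 0  # negative well before every score is bad
```

The point is the slope, not the scale: three "neutral" scores in a row read as fine in absolute terms, but a steady decline from "engaged" is the early warning.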

How do you handle a conflict between two strong engineers on your team?

First, I get the conflict out of Slack and into a synchronous conversation — async escalates because tone is missing. I meet with each privately, not to take sides but to understand what each one believes the disagreement is about. Often it's not the technical question they're arguing — it's a status, ownership, or autonomy concern dressed up as a technical debate. Once I understand both views, I bring them together with a clear frame: 'Here's the decision we owe the team by Friday. Here's what I heard from each of you. What would change your mind?' My job in the room is to keep the conversation on the artifact, not the person. If they can converge, great — we document the decision and move on. If they can't, I make the call myself, explain the reasoning, and ask both of them to commit to it publicly. Disagree-and-commit only works if commit is visible.

How do you support an engineer who wants to be promoted but isn't ready?

I separate the conversation into two threads: where they are today against the next-level rubric, and what concrete artifacts would close the gap. The mistake is to either over-promise ('absolutely, by next cycle') or to deflect ('keep working hard'). Neither helps the engineer make a real decision about how to spend the next six months. I share the rubric line by line, mark which lines they're already exceeding, which they're meeting, and which are not yet visible. For the not-yet-visible lines I propose specific projects that would create that evidence — not assignments I invented, but real work the team needs. Then I make a calibration prediction: 'If you ship X and Y at this quality level, I am prepared to advocate for you next cycle. I cannot guarantee committee outcomes.' That's an honest contract. Some engineers leave that conversation energized. A few realize they don't actually want what the next level requires, which is also a valid outcome.

Hiring & team building Q&A

First-time EM candidates often underweight this round. Hiring is the most leveraged decision a manager makes — one bad hire taxes a team for a year.

How do you decide who to hire?

I optimize for two things in this order: will this person raise the bar of the team, and will the team raise the bar of this person. The first is the obvious one — does the candidate bring a skill, perspective, or operating habit the team currently lacks. The second matters more than people admit — a strong candidate joining a team that can't grow them will plateau and leave within a year. So I evaluate signal across the loop, but I also evaluate fit-for-trajectory: where is this person trying to go in three years, and is this team's roadmap on the path. In debrief I push the panel away from 'I liked them' or 'they were sharp' toward specific evidence. I also weight the dissent vote heavily. If one strong interviewer has a structured concern, I dig in rather than overruling — false positives in hiring are far more expensive than false negatives.

How do you balance senior and junior engineers on a team?

My target ratio depends on the team's mandate. For a team owning a stable surface with high reliability requirements, I lean roughly 60 percent senior, 30 percent mid, 10 percent junior — too many juniors and the on-call rotation becomes painful. For a team building greenfield product, I'll push closer to 40 percent senior, 40 percent mid, 20 percent junior because juniors and mids ship faster on ambiguous problems than seasoned engineers who over-design. The bigger trap is mistaking title for shape. Two staff engineers who both want to design but neither wants to mentor will starve a team of growth. I look at the operating shape — who designs, who reviews, who debugs production, who mentors — and hire to fill the missing shape, not the missing level.

Describe how you'd build a team from scratch.

First 30 days: I do not hire. I spend that time understanding the charter, the stakeholders, and the existing surface area. I write down what success looks like at six months in concrete artifacts — what ships, what's measurable, what's deprecated. Days 30 to 60: I hire the first two engineers, both senior, both generalists, both people I have either worked with before or have unusually strong references for. The first two hires set the cultural defaults — code review tone, doc-writing habits, on-call rigor — for everyone who comes after. Days 60 to 120: I hire breadth. A specialist where the surface demands it (data, infra, frontend), one mid-level engineer to absorb operational load, and a junior if growth-headcount is part of the charter. Throughout, I resist the pressure to hire fast. A wrong hire in the first five poisons the team for a year. I'd rather be three months behind on roadmap than carry a misfire.

What do you look for in a hiring loop signal that others miss?

Two specific things. First, how the candidate talks about people who weren't in the room. If a candidate's stories all cast themselves as the rescuing hero and their colleagues as obstacles, that pattern will replay on my team. The strongest signal is when a candidate gives credit specifically and unprompted. Second, how they handle being wrong inside the interview. I deliberately push back on a claim, even if I agree with them. The candidate who restates their position more confidently is harder to coach than the one who pauses, considers, and either updates or holds the line with new reasoning. Engineers who can't update in a 45-minute interview won't update in a year of code review.

System design at scale Q&A

EMs are still expected to operate at staff IC depth on architecture. The bar shifts from 'can you design it' to 'can you make the right trade-offs and explain them to non-engineers'.

Design a multi-region deployment for a global SaaS application.

I start by surfacing the requirements that drive the architecture, not the architecture itself. Read latency target per region, write consistency requirements, regulatory data residency, RTO and RPO, and cost ceiling. Without those, multi-region debates devolve into religion. Assuming a typical SaaS read-heavy workload with eventual consistency tolerance for non-financial data: I run active-active for stateless services behind a global load balancer with health-check-driven failover, regional Kubernetes clusters, and a per-region cache tier. For data, I split the surface — strongly-consistent transactional data lands in a primary region with read replicas in others (engineers must mark cross-region reads explicitly, not by accident); eventually-consistent data (user content, search indexes) replicates async with conflict resolution defined per surface. Identity and billing stay in the primary region with read-through caches at the edge. I'd push back hard on going multi-region for write-availability before the product has revenue justifying the operational cost — most teams underestimate the engineering tax of cross-region replication by 3x.
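
The rule that engineers must mark cross-region reads explicitly can be made concrete with a small sketch. The client class, region names, and return values here are invented for illustration; the point is that an eventually-consistent read requires an opt-in flag rather than happening by accident.

```python
# Illustrative sketch of "cross-region reads must be explicit, not
# accidental". The class and region names are assumptions for the example.

class RegionalStore:
    def __init__(self, home_region, replicas):
        self.home_region = home_region
        self.replicas = set(replicas)

    def read(self, key, *, region, allow_stale=False):
        """Reads from the home region are strongly consistent.
        Reads from a replica must opt in with allow_stale=True."""
        if region == self.home_region:
            return ("strong", key)
        if region not in self.replicas:
            raise ValueError(f"no replica in {region}")
        if not allow_stale:
            raise RuntimeError(
                f"cross-region read of {key!r} from {region} "
                "must set allow_stale=True"
            )
        return ("stale-ok", key)

store = RegionalStore("us-east-1", replicas=["eu-west-1", "ap-southeast-1"])
assert store.read("invoice:42", region="us-east-1")[0] == "strong"
assert store.read("profile:7", region="eu-west-1", allow_stale=True)[0] == "stale-ok"
```

Making staleness a loud keyword argument turns a silent consistency bug into a code-review conversation.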

Design a microservices migration from a monolith.

The biggest failure mode here is doing it in the wrong order. I do not start by extracting services. I start by establishing the seams — bounded contexts, shared database tables that need to split, ownership boundaries. The first deliverable is a dependency map and a target topology, not code. The second deliverable is the strangler fig pattern: a routing layer that can send a request to either the monolith or the new service based on configuration, with full observability on both paths. Only then do I extract — typically starting with a leaf domain that has the cleanest data ownership and the lowest cross-cutting dependencies. I do not extract the most painful service first; I extract the most learnable one, so the team builds the migration playbook on a forgiving target. Each extraction follows the same pattern: dual-write, dual-read with comparison, switch reads, switch writes, retire the monolith path. I budget 6 to 9 months per major extraction and tell stakeholders that early — the failure mode is leadership treating the migration as a one-quarter project.
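
The strangler-fig routing layer above can be sketched minimally. Handler names, the config shape, and the comparison logic are illustrative assumptions; a real implementation would live in a proxy or API gateway, but the structure — config-driven primary path, optional shadow path, mismatch log — is the same.

```python
# Minimal sketch of the strangler-fig routing layer described above.
# Handler names and the config shape are illustrative assumptions.

def monolith_orders(request):
    return {"source": "monolith", "order": request["id"]}

def service_orders(request):
    return {"source": "orders-service", "order": request["id"]}

# Config decides, per domain, which path serves live traffic and
# whether to shadow the other path for dual-read comparison.
ROUTES = {
    "orders":  {"primary": service_orders, "shadow": monolith_orders, "compare": True},
    "billing": {"primary": monolith_orders, "shadow": None, "compare": False},
}

def route(domain, request, mismatches):
    cfg = ROUTES[domain]
    result = cfg["primary"](request)
    if cfg["compare"] and cfg["shadow"] is not None:
        shadow = cfg["shadow"](request)
        if shadow["order"] != result["order"]:
            mismatches.append((domain, request, result, shadow))
    return result

mismatches = []
resp = route("orders", {"id": 42}, mismatches)
assert resp["source"] == "orders-service"
assert mismatches == []  # both paths agreed on this request
```

Because the switch is configuration, each step of the migration — switch reads, switch writes, retire the monolith path — is a config change with full observability, not a deploy.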

Design developer tooling for a 100-engineer team.

At 100 engineers, the tax of bad tooling is roughly two hours per engineer per day — call it 25 percent of capacity. So tooling investment isn't a nice-to-have, it's the highest-leverage spend. I'd anchor on four pillars. First, build and test feedback loops: every engineer should be able to run the relevant test slice locally in under 60 seconds, full CI in under 15 minutes. Above those budgets, people batch their work and context-switch while they wait. Second, deployment safety: progressive rollout with automated rollback on metric regression, so engineers ship without dread. Third, observability with sane defaults: any engineer can answer 'is my service healthy' in two clicks without writing a query. Fourth, a paved-road service template — a single command that scaffolds a new service with logging, metrics, deploys, and on-call already wired. I'd staff a dedicated platform team of 4 to 6 engineers reporting to a manager whose only metric is product team velocity. Treating platform as a side-project of senior ICs is the failure mode I see most often.
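
The feedback-loop budgets above can be expressed as a simple CI guard. The budget numbers come from the text; the pipeline names and the idea of alerting on observed durations are assumptions for the sketch.

```python
# Sketch of the feedback-loop budgets above as a CI guard.
# Budget numbers come from the text; pipeline names are invented.

BUDGETS_SECONDS = {
    "local-test-slice": 60,   # under a minute, or people batch their work
    "full-ci": 15 * 60,       # under 15 minutes, or people context-switch
}

def over_budget(timings):
    """Return the pipelines whose observed duration exceeds its budget."""
    return sorted(
        name for name, seconds in timings.items()
        if seconds > BUDGETS_SECONDS.get(name, float("inf"))
    )

observed = {"local-test-slice": 45, "full-ci": 22 * 60}
assert over_budget(observed) == ["full-ci"]
```

Wiring a check like this into the platform team's dashboard keeps the budget a tracked number rather than a slide from last quarter.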

Project management Q&A

Scope, deadlines, and stakeholder communication. Interviewers are listening for written artifacts and structured trade-off conversations, not heroics.

How do you handle scope creep mid-project?

Scope creep is usually a symptom of a missing decision, not of a greedy stakeholder. When new requirements arrive, my first move is to surface what changed in the underlying assumption — did the customer signal shift, did a constraint get discovered, did leadership reprioritize. Once that's named, the conversation becomes about trade-offs rather than additions. I bring a written ledger to the conversation: current scope, current ship date, proposed addition, and three options — descope something else, push the date, or reduce quality on a specific axis (test coverage, doc completeness, edge case handling). The stakeholder picks. The pattern I refuse is silent absorption — if my team takes on a 20 percent scope addition without changing the ship date or the descope list, the team learns that overtime is the default, and I lose them within two cycles.
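
The written ledger above can be sketched as a small data structure. Field names, the person-weeks unit, and the option wording are invented for illustration; the point is that the three trade-off options are generated mechanically, and silent absorption is never one of them.

```python
# Sketch of the scope ledger described above. Field names and the
# person-weeks framing are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ScopeChange:
    addition: str
    cost_person_weeks: float

@dataclass
class Ledger:
    ship_date: str
    remaining_capacity_pw: float  # slack left before the ship date

    def options(self, change):
        """The stakeholder picks one; silent absorption is not listed."""
        gap = change.cost_person_weeks - self.remaining_capacity_pw
        return [
            f"descope {change.cost_person_weeks:g} person-weeks of current scope",
            f"push the date (gap: {max(gap, 0):g} person-weeks)",
            "reduce quality on a named axis (tests, docs, edge cases)",
        ]

ledger = Ledger(ship_date="2024-06-01", remaining_capacity_pw=1.0)
opts = ledger.options(ScopeChange("export to CSV", 3.0))
assert len(opts) == 3
```

Bringing the ledger in writing forces the stakeholder to choose among the three, rather than letting the addition land on the team unpriced.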

How do you handle a hard deadline you don't believe you can hit?

I name it early, in writing, with reasoning. The worst version of this is the manager who nods through the planning meeting and then misses the date — that destroys credibility for every future estimate. As soon as I see the gap, I write a one-pager: target date, current trajectory, gap in person-weeks, and three options. Option one: ship the full scope on a later date with specifics. Option two: ship a smaller scope on the original date with specifics about what's deferred. Option three: add resources, with an honest take on how much new headcount actually accelerates a mid-project team versus just adding coordination cost. Then I take it to my skip-level and the cross-functional partners, not to ask them to solve it but to make the trade-off visible. The decision is theirs; the clarity is mine to provide. Nine times out of ten the conversation reveals the deadline was less hard than it looked, and we land on option two.

Walk me through how you communicate project status to executives.

Executives don't want a status report — they want a decision request or a confidence signal, both of which fit in three sentences. I structure every update around the same skeleton: where we are against the goal, what changed since last update, and what I need from them, if anything. I use a red-yellow-green signal but I never go straight from green to red — if I'm trending toward red, I move to yellow with a specific reason and a recovery plan in the same message. The mistake new managers make is over-communicating noise (every standup blocker, every PR review delay) which trains executives to skim. I'd rather send four updates a quarter that get fully read than 40 that get ignored. When something is genuinely on fire, I escalate with proposed paths, not just the problem.

Strategic & executive Q&A

Senior EM and director-track loops add this layer. The rubric becomes vision clarity, organizational judgment, and outcome-driven planning.

How do you set technical vision for your team?

Technical vision is downstream of business reality, not upstream. I anchor on a 12-to-18-month forward question: what will we be able to do in 18 months that we can't do today, and what foundational shifts are required to get there. I write that as a one-page document — not a slide deck — that names the three or four highest-leverage architectural bets and what we're explicitly not doing. I share it first with senior ICs for technical critique, then with my peers for cross-team alignment, then with the team. The document is a living artifact; I revisit it quarterly and prune ruthlessly. The failure mode I avoid is vision-by-list — eight goals all marked critical, none with sequencing. A real vision says no to seven things so the team can say yes to one.

How do you think about organizational structure as a team grows?

I try to delay reorgs as long as possible because they're more expensive than they look — every reorg pauses real work for two to four weeks while reporting lines reset. My rule of thumb is to split a team when three signals stack: standups exceed 25 minutes, two distinct roadmaps emerge inside one team, and on-call burden is unevenly distributed by surface. When I split, I split along durable seams — domain ownership, customer surface — not along headcount targets. I also resist the temptation to mirror the org chart to the codebase; sometimes the right answer is to keep one team owning two services and split a different way. The goal is alignment between the way the team communicates and the way the system is structured, but the system shape leads, not the headcount math.

How do you set and run OKRs for an engineering team?

OKRs work when they force trade-offs and fail when they're just a status report dressed up. My team runs three objectives per quarter, no more, with two to three key results each. Every key result is a number that moves — latency, adoption, reliability, ship velocity — not a binary launch checkbox. I separate launch milestones into a roadmap document; OKRs are reserved for outcomes the launches are supposed to produce. Mid-quarter, I run a half-hour pulse: are we on track, what changed, do we want to swap a key result. Most quarters one key result gets retired or replaced, and that's healthy — pretending the world didn't change so the OKR doc looks consistent is dishonest. At end-of-quarter I score each key result with reasoning, and I score 0.7 as a successful aim. If we're hitting 1.0 every quarter, the targets weren't ambitious. If we're hitting 0.3, we're either overcommitting or not learning.
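
The end-of-quarter scoring rule above can be sketched numerically. The 0.7 aim and the "1.0 every quarter" / "hitting 0.3" signals come from the text; the exact thresholds in the sketch are assumptions.

```python
# Sketch of the scoring rule above: 0.7 is a successful aim; consistent
# 1.0s or 0.3s both signal a planning problem. Thresholds beyond those
# named in the text are illustrative assumptions.

def grade_quarter(scores):
    """Average the key-result scores and attach the planning verdict."""
    avg = sum(scores) / len(scores)
    if avg >= 0.95:
        return avg, "targets were not ambitious enough"
    if avg <= 0.4:
        return avg, "overcommitting or not learning"
    return avg, "healthy aim"

avg, verdict = grade_quarter([0.7, 0.8, 0.6])
assert verdict == "healthy aim"
```

The averaged score is a conversation starter, not a grade — the per-key-result reasoning is where the learning lives.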

Notes for first-time EM interviews

If this is your first manager loop, three habits do most of the work:

  • Bring written artifacts. In behavioral rounds, candidates who reference a doc they wrote, a rubric they used, or a one-pager they shipped score consistently higher than candidates who only narrate. Prep three real artifacts you can describe by name.
  • Name trade-offs out loud. The single highest-frequency feedback in EM debriefs is 'candidate did not surface the trade-off.' If you took an action, name the alternative you rejected and why.
  • Talk like an owner, not a narrator. First-time EMs often describe what 'the team' did. Interviewers want to know what you decided, what you wrote, what you said in the room. Use 'I' for decisions and 'we' for outcomes.

Practice EM rounds with PhantomCode Interview Copilot

The hard rounds in an EM loop are not the ones you can drill on paper — they're the ones where the interviewer pushes back on your performance review story, asks the second follow-up on a system design trade-off, or escalates the stakeholder roleplay. Run live mock rounds with structured feedback on the actual question types above.

Try Interview Copilot

Related: Software Engineer Q&A · Data Scientist Q&A · Behavioral round · System design round