advanced · nodejs · libuv · async · performance · 18 min

Walk me through the Node.js event loop phases

TL;DR
Before the loop begins, Node runs a one-time bootstrap: your top-level script executes synchronously, then process.nextTick and Promise microtask queues drain. Only then does the loop enter its first phase. Each iteration then cycles through six phases — timers → pending → idle/prepare → poll → check → close — with both microtask queues draining between every phase.

Before the loop starts — the bootstrap sequence

The event loop does not run first. When Node starts, it goes through a one-shot bootstrap before entering any phase. Interviewers probe this because it explains why top-level Promises and nextTicks fire before any setTimeout(0).

  1. Node initializes its runtime — V8 isolate, libuv loop handle, built-in modules, internal bindings.
  2. Preload modules run — anything loaded via --require (CJS) or --import (ESM), then the entry module.
  3. Your top-level sync script executes on the call stack. Every setTimeout / setImmediate / fs.readFile / Promise / nextTick encountered at this stage is registered into its target queue — but nothing fires yet.
  4. process.nextTick queue drains fully. Callbacks scheduled during module evaluation run now.
  5. Promise microtask queue drains fully. This is where top-level Promise.then and top-level await resumptions fire.
  6. The loop enters its first iteration, starting at the Timers phase.
setTimeout(() => console.log('T'), 0);
Promise.resolve().then(() => console.log('P'));
process.nextTick(() => console.log('N'));
console.log('S');

// Output order: S, N, P, T

S runs inside the top-level script. N fires during bootstrap's nextTick drain. P fires during bootstrap's microtask drain. T fires in the first Timers phase, AFTER bootstrap finishes.

The bootstrap is not a libuv phase. It's a separate runtime stage — V8 evaluating your module plus Node's microtask drains. In the simulation it's shown as a dashed box above the loop to reinforce this.

The six phases (in order, per iteration)

Each iteration of the loop (a 'tick') walks through these six phases in order. Each phase has its own FIFO callback queue: the loop enters a phase, runs its callbacks until the queue is empty or a system-dependent cap is reached, then moves on.

1. Timers

Runs callbacks scheduled by setTimeout() and setInterval() whose threshold has elapsed. The threshold is a *minimum*, not a guarantee: if a poll-phase callback blocks for 40ms, a setTimeout(fn, 10) fires only after those ~40ms, once the loop gets back to timers.
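To see the minimum-not-guarantee behavior directly, here is a minimal sketch: a 10 ms timer is scheduled, then the thread is deliberately busy-waited for ~50 ms, so the callback cannot fire until the stack clears (the `elapsedMs` probe variable is illustrative).

```javascript
// A timer's threshold is a minimum. Blocking the thread for ~50 ms
// delays a 10 ms setTimeout until control returns to the timers phase.
let elapsedMs; // illustrative probe, set when the timer finally fires
const scheduled = Date.now();

setTimeout(() => {
  elapsedMs = Date.now() - scheduled;
  console.log(`fired after ~${elapsedMs} ms (asked for 10)`);
}, 10);

const start = Date.now();
while (Date.now() - start < 50) {} // synchronous busy-wait blocks the loop
```

Running this typically reports ~50 ms or more, not 10.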

2. Pending callbacks

Runs some system operation callbacks deferred from the previous tick — e.g., a TCP error like ECONNREFUSED on some platforms. You rarely interact with this phase directly.

3. Idle, prepare

Internal only. Libuv housekeeping. Not observable from user code.

4. Poll

The workhorse phase. Two jobs: (1) calculate how long to block for I/O, (2) process I/O callbacks (fs, net, http response, etc.). If the poll queue is empty and there are pending setImmediate callbacks, the loop exits poll and goes to check. If no setImmediate is queued, it may *block* here waiting for incoming I/O — bounded by the nearest timer threshold.

5. Check

Runs setImmediate() callbacks. By design, setImmediate is guaranteed to run immediately after the poll phase — it's the safe way to schedule 'do this right after my I/O completes'.

6. Close callbacks

Runs 'close' event callbacks (e.g., socket.on('close', ...)). Then the tick ends and the loop returns to timers.

Between every phase: two microtask queues drain

  1. process.nextTick queue — drained first, fully, before anything else.
  2. Promise microtask queue — drained second, fully (then.catch.finally, queueMicrotask, await resumptions).
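The two drains are observable from inside any phase callback. A minimal sketch (the `order` array is illustrative): everything is scheduled inside one timer callback, so when that callback returns, nextTick drains first, then the promise queue, and only then does the loop advance to the check phase.

```javascript
const order = [];

setTimeout(() => {
  process.nextTick(() => order.push('nextTick'));     // drains first
  Promise.resolve().then(() => order.push('promise')); // drains second
  setImmediate(() => {                                 // next phase (check)
    order.push('immediate');
    console.log(order.join(' → ')); // nextTick → promise → immediate
  });
}, 0);
```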

Which API goes where — cheat sheet

  • setTimeout / setInterval → Timers
  • setImmediate → Check
  • fs.* callbacks, net/http server handlers → Poll (I/O)
  • process.nextTick → nextTick queue (between phases)
  • Promise.then / await resumption → Promise microtask queue (between phases)
  • socket.on('close'), stream 'close' → Close callbacks

The classic trick question

setTimeout(() => console.log('timeout'), 0);
setImmediate(() => console.log('immediate'));

What prints first? It's nondeterministic at the top level.

At the top of a program, the order depends on how long process startup takes. A setTimeout(fn, 0) is clamped to a 1ms threshold; if at least 1ms has elapsed by the time the loop enters timers, setTimeout fires first, otherwise setImmediate does. BUT inside an I/O callback, the answer is always setImmediate first. Why? You're already inside poll; when poll ends, check runs before the loop wraps back to timers.

require('fs').readFile(__filename, () => {
  setTimeout(() => console.log('timeout'), 0);
  setImmediate(() => console.log('immediate'));
});

Inside an I/O callback: 'immediate' always prints first.

Step through the Node.js phases

Schedule callbacks via different APIs and watch them drop into the correct phase. Tick the loop to walk through each phase — microtasks drain between them.


Performance implications

  • CPU-bound work on the main thread blocks every phase. Move it to a Worker or a child process.
  • process.nextTick is a footgun for recursion — it starves I/O. Use setImmediate when you need to yield.
  • A slow promise chain inside an I/O callback blocks the rest of poll from progressing.
  • Set server timeouts (server.setTimeout) — a slow client blocks a socket and keeps poll busy.
  • If you see tail-latency spikes unrelated to load, check for large synchronous JSON.parse, bcrypt, sync fs, or sync crypto on the hot path.
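The nextTick-starvation point has a standard antidote: re-queue long work via setImmediate so every iteration still reaches the poll phase. A minimal sketch (the `processChunks` helper and `finished` flag are illustrative, not a library API):

```javascript
let finished = false; // illustrative flag

// Recursive process.nextTick would never let the loop advance past the
// current phase, starving I/O. setImmediate re-queues into the check
// phase, so timers and poll get a turn between chunks.
function processChunks(items, done) {
  const item = items.shift();
  if (item === undefined) return done();
  // ...a slice of CPU work on `item` would happen here...
  setImmediate(() => processChunks(items, done)); // yield to the loop
}

processChunks([1, 2, 3], () => {
  finished = true;
  console.log('all chunks done without starving the loop');
});
```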

Interviewer follow-ups

  • Why does process.nextTick exist separately from Promise microtasks? (historical — predates Promises; kept for priority + perf)
  • What does queueMicrotask schedule? (Promise microtask queue, same as .then)
  • What happens when an unhandled rejection fires? ('unhandledRejection' event on process, default terminates in newer Node versions)
  • How do Worker Threads interact with the loop? (each Worker has its own libuv loop, but MessagePort delivery goes through the parent's loop)