Node.js Backend Interview Questions: The Complete 2026 Guide

The 25 most frequently asked Node.js backend interview questions. Event loop, async/await, streams, clustering, and performance, explained with detailed answers.


Node.js backend interviews test your understanding of runtime internals, your mastery of asynchronous patterns, and your ability to design high-performance applications. This guide covers the most frequently asked questions, from the basics through advanced production concepts.

Interview Tip

Interviewers value answers that combine theory with practical examples. For each question, backing your explanation with a code illustration or a concrete use case demonstrates real-world experience.

Node.js Fundamentals

Question 1: What is the Event Loop and how does it work?

The Event Loop is the core mechanism that lets Node.js handle asynchronous operations in a non-blocking way despite running on a single thread. It orchestrates the execution of JavaScript code, callbacks, and events.

event-loop-demo.js
// Demonstration of Event Loop execution order

console.log('1. Script start (synchronous)');

// setTimeout goes to the Timer Queue
setTimeout(() => {
  console.log('5. setTimeout callback (Timer Queue)');
}, 0);

// setImmediate goes to the Check Queue
setImmediate(() => {
  console.log('6. setImmediate callback (Check Queue)');
});

// Promise goes to the Microtask Queue (priority)
Promise.resolve().then(() => {
  console.log('3. Promise.then (Microtask Queue)');
});

// process.nextTick has the highest priority
process.nextTick(() => {
  console.log('2. process.nextTick (nextTick Queue)');
});

console.log('4. Script end (synchronous)');

// Output order: 1, 4, 2, 3, then 5 and 6
// (at top level the relative order of setTimeout(0) and setImmediate
// is not guaranteed; inside an I/O callback setImmediate always wins)

The Event Loop processes its queues in a precise order: synchronous code first, then nextTick callbacks, microtasks (Promises), timers, I/O callbacks, setImmediate, and finally close callbacks.

Question 2: What is the difference between process.nextTick() and setImmediate()?

This question tests detailed understanding of execution priority in the Event Loop.

nextTick-vs-immediate.js
// Behavior comparison

// process.nextTick executes BEFORE the next Event Loop phase
process.nextTick(() => {
  console.log('nextTick 1');
  process.nextTick(() => {
    console.log('nextTick 2 (nested)');
  });
});

// setImmediate executes in the Check phase of the Event Loop
setImmediate(() => {
  console.log('setImmediate 1');
  setImmediate(() => {
    console.log('setImmediate 2 (nested)');
  });
});

// Output: nextTick 1, nextTick 2, setImmediate 1, setImmediate 2

process.nextTick() is processed immediately after the current operation, before the Event Loop continues. Overuse can block the Event Loop. setImmediate() is more predictable and is the recommended way to defer execution.

Watch Out for Starvation

Recursive process.nextTick() calls can starve the Event Loop and prevent I/O from being processed. Use setImmediate() for non-critical operations.
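A bounded sketch of the effect: a chain of nextTick callbacks is drained completely before the Event Loop reaches its Timers phase, so even a 0 ms timer has to wait for all of them.

```javascript
let ticks = 0;
let ticksSeenByTimer = -1;

// 1000 chained nextTick callbacks: each one schedules the next,
// and the nextTick queue is drained in full before any phase runs.
function recurse() {
  if (++ticks < 1000) process.nextTick(recurse);
}
process.nextTick(recurse);

setTimeout(() => {
  // The timer only fires after the entire chain has completed
  ticksSeenByTimer = ticks; // 1000
}, 0);
```

With an unbounded chain the timer (and all I/O) would never run at all.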

Question 3: How does Node.js handle errors in asynchronous code?

Asynchronous error handling differs fundamentally from synchronous code. Without proper handling, a single error can crash the application.

error-handling.js
// Asynchronous error handling patterns

// Pattern 1: Callbacks with error-first convention
function readFileCallback(path, callback) {
  const fs = require('fs');
  fs.readFile(path, 'utf8', (err, data) => {
    if (err) {
      // Error is ALWAYS the first argument
      return callback(err, null);
    }
    callback(null, data);
  });
}

// Pattern 2: Promises with catch
async function readFilePromise(path) {
  const fs = require('fs').promises;
  try {
    const data = await fs.readFile(path, 'utf8');
    return data;
  } catch (err) {
    // Centralized error handling
    console.error(`File read error: ${err.message}`);
    throw err; // Re-throw for propagation
  }
}

// Pattern 3: Global handling of unhandled rejections
process.on('unhandledRejection', (reason, promise) => {
  console.error('Unhandled rejection:', reason);
  // In production: log and graceful shutdown
});

// Pattern 4: Handling uncaught exceptions
process.on('uncaughtException', (err) => {
  console.error('Uncaught exception:', err);
  // CRITICAL: always terminate the process after
  process.exit(1);
});

In production, every Promise should have a .catch() or live inside a try/catch block. Global handlers are a safety net, not the primary solution.
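One way to enforce this in an Express app is a small wrapper that routes every rejected promise into the error middleware. The helper name and route below are illustrative (the express-async-handler package publishes the same idea):

```javascript
// Wrap an async route handler so any rejection reaches next(),
// i.e. Express's centralized error middleware, instead of
// becoming an unhandled rejection.
const asyncHandler = (fn) => (req, res, next) =>
  Promise.resolve(fn(req, res, next)).catch(next);

// Hypothetical usage:
// app.get('/users/:id', asyncHandler(async (req, res) => {
//   const user = await db.users.findById(req.params.id); // may throw
//   res.json(user);
// }));
```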

Asynchronous Programming and Concurrency

Question 4: Explain the difference between parallelism and concurrency in Node.js

Node.js is concurrent but not parallel by default. This distinction is fundamental to understanding performance.

concurrency-vs-parallelism.js
// CONCURRENCY: multiple tasks progress by alternating (single-thread)
async function concurrentTasks() {
  console.time('concurrent');

  // These calls are concurrent, not parallel
  const results = await Promise.all([
    fetch('https://api.example.com/users'),    // Non-blocking I/O
    fetch('https://api.example.com/products'), // Non-blocking I/O
    fetch('https://api.example.com/orders'),   // Non-blocking I/O
  ]);

  console.timeEnd('concurrent'); // ~time of the longest request
  return results;
}

// PARALLELISM: with Worker Threads for CPU-bound tasks
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  // Main thread delegates CPU-intensive work
  async function parallelComputation() {
    console.time('parallel');

    const workers = [
      createWorker({ start: 0, end: 1000000 }),
      createWorker({ start: 1000000, end: 2000000 }),
      createWorker({ start: 2000000, end: 3000000 }),
    ];

    const results = await Promise.all(workers);
    console.timeEnd('parallel');
    return results.reduce((a, b) => a + b, 0);
  }

  function createWorker(data) {
    return new Promise((resolve, reject) => {
      const worker = new Worker(__filename, { workerData: data });
      worker.on('message', resolve);
      worker.on('error', reject);
    });
  }
} else {
  // Code executed in the Worker Thread
  const { workerData } = require('worker_threads');
  let sum = 0;
  for (let i = workerData.start; i < workerData.end; i++) {
    sum += Math.sqrt(i); // CPU-intensive calculation
  }
  parentPort.postMessage(sum);
}

For I/O-bound operations (network, files), the built-in concurrency is sufficient. For CPU-bound tasks (heavy computation, cryptography), Worker Threads enable true parallelism.

Question 5: How does the Cluster module work?

The Cluster module spawns multiple Node.js processes that share the same port, making use of every available CPU core.

cluster-example.js
const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`);
  console.log(`Forking ${numCPUs} workers...`);

  // Fork one worker per CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  // Handle crashing workers
  cluster.on('exit', (worker, code, signal) => {
    console.log(`Worker ${worker.process.pid} died (${signal || code})`);
    console.log('Starting a new worker...');
    cluster.fork(); // Automatic restart
  });

  // Inter-process communication
  cluster.on('message', (worker, message) => {
    console.log(`Message from worker ${worker.id}:`, message);
  });

} else {
  // Workers share the TCP port
  http.createServer((req, res) => {
    res.writeHead(200);
    res.end(`Handled by worker ${process.pid}\n`);

    // Send stats to primary
    process.send({ type: 'request', pid: process.pid });
  }).listen(8000);

  console.log(`Worker ${process.pid} started`);
}

Load balancing is handled automatically: by default the primary process distributes incoming connections round-robin (on every platform except Windows). In production, PM2 simplifies this management with its built-in cluster mode.

Ready to ace your Node.js / NestJS interview?

Practice with our interactive simulator, flashcards, and technical tests.

Streams and Buffers

Question 6: When should Streams be used instead of classic methods?

Streams process data incrementally (in chunks) instead of loading everything into memory. They are essential for large files and streaming scenarios.

streams-comparison.js
const fs = require('fs');

// ❌ BAD: loads entire file into memory
async function readEntireFile(path) {
  const data = await fs.promises.readFile(path); // Whole file in memory; fails for files larger than available RAM
  return processData(data);
}

// ✅ GOOD: chunk-based processing with Stream
function readWithStream(path) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    const readStream = fs.createReadStream(path, {
      highWaterMark: 64 * 1024, // 64KB per chunk
    });

    readStream.on('data', (chunk) => {
      // Progressive processing, constant memory
      chunks.push(processChunk(chunk));
    });

    readStream.on('end', () => resolve(chunks));
    readStream.on('error', reject);
  });
}

// ✅ BEST: pipeline for chaining transformations
const { pipeline } = require('stream/promises');
const zlib = require('zlib');

async function compressFile(input, output) {
  await pipeline(
    fs.createReadStream(input),   // Source
    zlib.createGzip(),            // Transform
    fs.createWriteStream(output)  // Destination
  );
  // Automatic error handling and backpressure management
}

Use Streams whenever the data size can exceed a few MB, or for real-time processing (uploads, logs, network data).

Question 7: Explain the concept of backpressure

Backpressure occurs when a data producer is faster than its consumer. Without proper handling, memory usage explodes.

backpressure-demo.js
const fs = require('fs');

// ❌ Problem: no backpressure handling
function badCopy(src, dest) {
  const readable = fs.createReadStream(src);
  const writable = fs.createWriteStream(dest);

  readable.on('data', (chunk) => {
    // If write() returns false, the internal buffer is full
    // But here reading continues anyway → memory leak
    writable.write(chunk);
  });
}

// ✅ Solution: respect the writable signal
function goodCopy(src, dest) {
  const readable = fs.createReadStream(src);
  const writable = fs.createWriteStream(dest);

  readable.on('data', (chunk) => {
    const canContinue = writable.write(chunk);

    if (!canContinue) {
      // Pause reading until buffer drains
      readable.pause();
    }
  });

  writable.on('drain', () => {
    // Buffer drained, resume reading
    readable.resume();
  });

  readable.on('end', () => writable.end());
}

// ✅ BEST: pipe() handles everything automatically
function bestCopy(src, dest) {
  const readable = fs.createReadStream(src);
  const writable = fs.createWriteStream(dest);

  // pipe() handles backpressure natively
  readable.pipe(writable);
}

The pipe() and pipeline() methods handle backpressure automatically. Only complex cases require implementing the pause/resume logic manually.

Performance and Optimization

Question 8: How do you identify and fix memory leaks?

Memory leaks are a common problem in Node.js. Knowing how to detect and fix them is essential in production.

memory-leak-patterns.js
// ❌ Leak 1: closures that retain references
function createLeakyHandler() {
  const hugeData = Buffer.alloc(100 * 1024 * 1024); // 100MB

  return function handler(req, res) {
    // hugeData remains in memory as long as handler exists
    res.end('Hello');
  };
}

// ✅ Fix: limit the scope
function createSafeHandler() {
  return function handler(req, res) {
    // Data created and released on each request
    const data = fetchData();
    res.end(data);
  };
}

// ❌ Leak 2: event listeners not cleaned up
class LeakyClass {
  constructor() {
    // Added on each instantiation, never removed
    process.on('message', this.handleMessage);
  }
  handleMessage(msg) { /* ... */ }
}

// ✅ Fix: explicit cleanup
class SafeClass {
  constructor() {
    this.boundHandler = this.handleMessage.bind(this);
    process.on('message', this.boundHandler);
  }
  handleMessage(msg) { /* ... */ }

  destroy() {
    // Mandatory cleanup
    process.removeListener('message', this.boundHandler);
  }
}

// Diagnostics with native tools
function diagnoseMemory() {
  const used = process.memoryUsage();
  console.log({
    heapUsed: `${Math.round(used.heapUsed / 1024 / 1024)}MB`,
    heapTotal: `${Math.round(used.heapTotal / 1024 / 1024)}MB`,
    external: `${Math.round(used.external / 1024 / 1024)}MB`,
    rss: `${Math.round(used.rss / 1024 / 1024)}MB`,
  });
}

// Enable manual garbage collector for testing
// node --expose-gc app.js
if (global.gc) {
  global.gc();
  diagnoseMemory();
}

In production, use tools such as clinic.js, Chrome DevTools heap snapshots, or an APM (Application Performance Monitoring) solution like DataDog or New Relic.

Question 9: How do you optimize the performance of a Node.js API?

This question tests knowledge of optimization techniques at several levels.

performance-optimization.js
// 1. CACHING: reduce expensive calls
const NodeCache = require('node-cache');
const cache = new NodeCache({ stdTTL: 300 }); // 5-minute TTL

async function getCachedUser(id) {
  const cacheKey = `user:${id}`;
  let user = cache.get(cacheKey);

  if (!user) {
    user = await db.users.findById(id);
    cache.set(cacheKey, user);
  }

  return user;
}

// 2. CONNECTION POOLING: reuse DB connections
const { Pool } = require('pg');
const pool = new Pool({
  max: 20,                // Max simultaneous connections
  idleTimeoutMillis: 30000,
  connectionTimeoutMillis: 2000,
});

// 3. COMPRESSION: reduce response size
const compression = require('compression');
app.use(compression({
  filter: (req, res) => {
    // Only compress if > 1KB
    return compression.filter(req, res);
  },
  threshold: 1024,
}));

// 4. BATCHING: group operations
async function batchInsert(items) {
  const BATCH_SIZE = 1000;

  for (let i = 0; i < items.length; i += BATCH_SIZE) {
    const batch = items.slice(i, i + BATCH_SIZE);
    await db.items.insertMany(batch);
  }
}

// 5. LAZY LOADING: load on demand
async function getUserWithPosts(userId, includePosts = false) {
  const user = await db.users.findById(userId);

  if (includePosts) {
    user.posts = await db.posts.findByUserId(userId);
  }

  return user;
}

Optimization should be guided by profiling. Measure first to identify the real bottlenecks before optimizing.

The 80/20 Rule

80% of performance problems come from 20% of the code. Use profiling to pinpoint these critical areas instead of optimizing blindly.

Security

Question 10: How do you protect a Node.js API from common attacks?

Security is a recurring interview topic. Familiarity with the OWASP vulnerabilities is expected.

security-best-practices.js
const express = require('express');
const helmet = require('helmet');
const rateLimit = require('express-rate-limit');
const mongoSanitize = require('express-mongo-sanitize');
const xss = require('xss-clean');

const app = express();

// 1. SECURITY HEADERS with Helmet
app.use(helmet());

// 2. RATE LIMITING against brute-force attacks
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,                  // 100 requests per IP
  message: 'Too many requests, please try again later',
  standardHeaders: true,
  legacyHeaders: false,
});
app.use('/api/', limiter);

// 3. SANITIZATION against NoSQL injections
app.use(mongoSanitize());

// 4. XSS PROTECTION
app.use(xss());

// 5. STRICT INPUT VALIDATION
const { body, validationResult } = require('express-validator');

app.post('/api/users',
  [
    body('email').isEmail().normalizeEmail(),
    body('password').isLength({ min: 8 }).escape(),
    body('name').trim().escape(),
  ],
  (req, res) => {
    const errors = validationResult(req);
    if (!errors.isEmpty()) {
      return res.status(400).json({ errors: errors.array() });
    }
    // Continue processing
  }
);

// 6. SQL INJECTION PROTECTION (with parameters)
async function safeQuery(userId) {
  // ✅ Parameterized query
  const result = await pool.query(
    'SELECT * FROM users WHERE id = $1',
    [userId]
  );
  return result.rows;
}

// ❌ NEVER string concatenation
async function unsafeQuery(userId) {
  // Vulnerable to SQL injection
  const result = await pool.query(
    `SELECT * FROM users WHERE id = ${userId}`
  );
}

In production, also add: strict CORS, mandatory HTTPS, security logging, secret rotation, and regular dependency audits (npm audit).
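Strict CORS boils down to an explicit origin allowlist; the core logic, which libraries such as the cors package wrap, can be sketched as follows (the origin below is a placeholder):

```javascript
// Origins allowed to read responses cross-origin (placeholder value)
const allowlist = new Set(['https://app.example.com']);

function corsHeaders(origin) {
  if (origin && allowlist.has(origin)) {
    return {
      'Access-Control-Allow-Origin': origin, // echo the exact origin
      'Access-Control-Allow-Credentials': 'true',
      'Vary': 'Origin', // caches must not mix responses across origins
    };
  }
  return {}; // no header → the browser blocks the cross-origin read
}
```

Echoing the specific origin rather than `*` is required whenever credentials are involved.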

Architecture and Design Patterns

Question 11: Explain the Repository pattern in Node.js

The Repository pattern abstracts data access, making testing and maintenance easier.

repository-pattern.js
// Abstract interface (for TypeScript, or documentation)
class UserRepository {
  async findById(id) { throw new Error('Not implemented'); }
  async findByEmail(email) { throw new Error('Not implemented'); }
  async create(userData) { throw new Error('Not implemented'); }
  async update(id, userData) { throw new Error('Not implemented'); }
  async delete(id) { throw new Error('Not implemented'); }
}

// Concrete implementation with Prisma
class PrismaUserRepository extends UserRepository {
  constructor(prisma) {
    super();
    this.prisma = prisma;
  }

  async findById(id) {
    return this.prisma.user.findUnique({ where: { id } });
  }

  async findByEmail(email) {
    return this.prisma.user.findUnique({ where: { email } });
  }

  async create(userData) {
    return this.prisma.user.create({ data: userData });
  }

  async update(id, userData) {
    return this.prisma.user.update({
      where: { id },
      data: userData,
    });
  }

  async delete(id) {
    return this.prisma.user.delete({ where: { id } });
  }
}

// Implementation for testing
class InMemoryUserRepository extends UserRepository {
  constructor() {
    super();
    this.users = new Map();
    this.idCounter = 1;
  }

  async findById(id) {
    return this.users.get(id) || null;
  }

  async create(userData) {
    const user = { id: this.idCounter++, ...userData };
    this.users.set(user.id, user);
    return user;
  }
  // ... other methods
}

// Service using the repository (dependency injection)
class UserService {
  constructor(userRepository) {
    this.userRepository = userRepository;
  }

  async getUser(id) {
    const user = await this.userRepository.findById(id);
    if (!user) throw new Error('User not found');
    return user;
  }
}

This pattern lets you swap the persistence implementation without touching the business logic.

Question 12: How do you implement a job queue system?

Queues let you defer heavy tasks and guarantee reliable execution.

job-queue.js
const Queue = require('bull');

// Create queue with Redis as backend
const emailQueue = new Queue('email', {
  redis: {
    host: 'localhost',
    port: 6379,
  },
  defaultJobOptions: {
    attempts: 3,          // Number of attempts
    backoff: {
      type: 'exponential',
      delay: 2000,        // Initial delay between attempts
    },
    removeOnComplete: 100, // Keep last 100 completed jobs
  },
});

// Producer: add jobs to the queue
async function sendWelcomeEmail(userId, email) {
  await emailQueue.add('welcome', {
    userId,
    email,
    template: 'welcome',
  }, {
    priority: 1,          // High priority
    delay: 5000,          // 5-second delay
  });
}

// Consumer: process jobs
emailQueue.process('welcome', async (job) => {
  const { userId, email, template } = job.data;

  // Update progress
  job.progress(10);

  const html = await renderTemplate(template, { userId });
  job.progress(50);

  await sendEmail(email, 'Welcome!', html);
  job.progress(100);

  return { sent: true, email };
});

// Event handling
emailQueue.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed:`, result);
});

emailQueue.on('failed', (job, err) => {
  console.error(`Job ${job.id} failed:`, err.message);
});

// Recurring jobs (cron)
emailQueue.add('newsletter', { type: 'weekly' }, {
  repeat: {
    cron: '0 9 * * MON', // Every Monday at 9am
  },
});

Bull backed by Redis is the most popular solution. For simpler needs, agenda or bee-queue are lightweight alternatives.

Advanced Questions

Question 13: How do N-API native modules work?

N-API lets you write native modules in C/C++ with an API that stays stable across Node.js versions.

native-module.cpp
// Native module for CPU-intensive calculations

#include <napi.h>

// Synchronous function exposed to JavaScript
Napi::Number Fibonacci(const Napi::CallbackInfo& info) {
  Napi::Env env = info.Env();

  // Argument validation
  if (info.Length() < 1 || !info[0].IsNumber()) {
    Napi::TypeError::New(env, "Number expected")
      .ThrowAsJavaScriptException();
    return Napi::Number::New(env, 0);
  }

  int n = info[0].As<Napi::Number>().Int32Value();

  // Iterative Fibonacci calculation
  long long a = 0, b = 1;
  for (int i = 0; i < n; i++) {
    long long temp = a + b;
    a = b;
    b = temp;
  }

  return Napi::Number::New(env, static_cast<double>(a));
}

// Module initialization
Napi::Object Init(Napi::Env env, Napi::Object exports) {
  exports.Set(
    Napi::String::New(env, "fibonacci"),
    Napi::Function::New(env, Fibonacci)
  );
  return exports;
}

NODE_API_MODULE(native_module, Init)
// Usage from JavaScript
const native = require('./build/Release/native_module');

// Typically far faster than an equivalent pure-JavaScript hot loop
const result = native.fibonacci(50);

Native modules are useful for intensive computation, integrating existing C/C++ libraries, or accessing system APIs.

Question 14: Explain the V8 Garbage Collector

Understanding the GC helps you write code that minimizes pauses and memory consumption.

gc-optimization.js
// V8 GC uses two spaces: Young and Old Generation

// 1. Young Generation: short-lived objects
function shortLivedObjects() {
  for (let i = 0; i < 1000; i++) {
    const temp = { data: i }; // Allocated then collected quickly
  }
  // Minor GC (Scavenge) very fast
}

// 2. Old Generation: objects that survive multiple GCs
const cache = new Map(); // Survives, promoted to Old Generation

// ❌ Problematic pattern: many promoted objects
function createManyLongLived() {
  const objects = [];
  for (let i = 0; i < 100000; i++) {
    objects.push({ id: i, data: new Array(100).fill(0) });
  }
  return objects; // All promoted to Old Gen = slow major GC
}

// ✅ Optimized pattern: object reuse
class ObjectPool {
  constructor(factory, size = 100) {
    this.factory = factory;
    this.available = Array.from({ length: size }, factory);
  }

  acquire() {
    // Create a fresh object only when the pool is exhausted
    return this.available.pop() || this.factory();
  }

  release(obj) {
    // Reset and return to pool
    Object.keys(obj).forEach(k => obj[k] = null);
    this.available.push(obj);
  }
}

// GC monitoring
const v8 = require('v8');

function getHeapStats() {
  const stats = v8.getHeapStatistics();
  return {
    totalHeap: `${Math.round(stats.total_heap_size / 1024 / 1024)}MB`,
    usedHeap: `${Math.round(stats.used_heap_size / 1024 / 1024)}MB`,
    heapLimit: `${Math.round(stats.heap_size_limit / 1024 / 1024)}MB`,
  };
}

The --max-old-space-size flag raises the Old Generation limit for memory-hungry applications.

Question 15: How do you implement a graceful shutdown?

A graceful shutdown finishes in-flight requests and closes connections cleanly before stopping the server.

graceful-shutdown.js
const http = require('http');

const server = http.createServer((req, res) => {
  // Simulate a long request
  setTimeout(() => {
    res.writeHead(200);
    res.end('Done');
  }, 2000);
});

// Tracking active connections
let connections = new Set();

server.on('connection', (conn) => {
  connections.add(conn);
  conn.on('close', () => connections.delete(conn));
});

// Graceful shutdown function
async function shutdown(signal) {
  console.log(`${signal} received, starting graceful shutdown...`);

  // Safety timeout: force exit if cleanup hangs (unref'd so the
  // timer alone cannot keep the process alive)
  const forceTimer = setTimeout(() => {
    console.error('Forced shutdown after timeout');
    process.exit(1);
  }, 30000);
  forceTimer.unref();

  // 1. Stop accepting new connections; close() only completes
  //    once in-flight requests have finished
  const serverClosed = new Promise((resolve) => server.close(resolve));

  // 2. Close idle (keep-alive) connections so close() can resolve
  for (const conn of connections) {
    conn.end();
  }
  await serverClosed;
  console.log('HTTP server closed');

  // 3. Close DB connections, queues, etc.
  await Promise.all([
    database.disconnect(),
    redisClient.quit(),
    messageQueue.close(),
  ]);

  console.log('Graceful shutdown completed');
  process.exit(0);
}

// Listen for termination signals
process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

// Start server
server.listen(3000, () => {
  console.log('Server running on port 3000');
});

In containerized production environments (Docker, Kubernetes), graceful shutdown is essential for zero-downtime deployments.


Behavioral Questions

Question 16: Tell us about a performance problem you solved

This question evaluates practical experience. Structure your answer using the STAR format (Situation, Task, Action, Result).

An example of a structured answer:

Situation: A reporting API was timing out on requests
            exceeding 100,000 records.

Task:       Reduce response time from 45s to under 5s.

Action:
1. Profiling with clinic.js → identified JSON serialization as bottleneck
2. Implemented streaming with Transform streams
3. Database-side pagination
4. Added Redis caching for frequent queries

Result:     Response time reduced to 2s, memory usage decreased by 10x.

Question 17: How do you manage dependencies and their updates?

package.json (versioning best practices)
{
  "dependencies": {
    // ✅ Exact versions for production
    "express": "4.18.2",

    // ✅ Caret for compatible minor updates
    "lodash": "^4.17.21",

    // ❌ Avoid latest or *
    // "some-lib": "*"
  },
  "devDependencies": {
    // Quality tools
    "npm-check-updates": "^16.0.0"
  },
  "scripts": {
    // Vulnerability check
    "audit": "npm audit --audit-level=moderate",

    // Interactive update
    "update:check": "ncu",
    "update:apply": "ncu -u && npm install"
  },
  "engines": {
    // Specify required Node.js version
    "node": ">=20.0.0"
  }
}

Mention the use of package-lock.json, Dependabot or Renovate for automation, and regression tests before every major update.

Conclusion

Node.js backend interviews evaluate both your theoretical understanding of the runtime's internals and your practical ability to solve production problems. Mastery of the Event Loop, asynchronous patterns, and optimization techniques is the expected foundation for a senior backend developer position.

Preparation Checklist

  • ✅ Understand how the Event Loop and its phases work
  • ✅ Master the differences between callbacks, Promises, and async/await
  • ✅ Know the asynchronous error-handling patterns
  • ✅ Know when to use Streams versus classic methods
  • ✅ Be able to identify and fix memory leaks
  • ✅ Apply OWASP security best practices
  • ✅ Implement clustering and graceful shutdown
  • ✅ Use profiling tools (clinic.js, Chrome DevTools)

Start practicing!

Test your knowledge with our interview simulator and technical tests.

Technical preparation should be complemented with hands-on projects. Building a production API, contributing to open source Node.js projects, or solving challenges on platforms like LeetCode helps consolidate this knowledge.

Tags

#nodejs
#interview
#backend
#javascript
#technical interview
