Laravel Queues and Jobs: Asynchronous Architecture and Interview Questions 2026

Deep dive into Laravel queues and jobs architecture. Covers job dispatching, batching, chaining, middleware, failed job handling, and queue worker management with Laravel 12 examples.

[Figure: Laravel queues and jobs asynchronous architecture, showing worker processes and the job dispatching pipeline]

Laravel queues provide a unified API for deferring time-consuming tasks — sending emails, processing uploads, generating reports — to background workers. Instead of forcing users to wait, the application pushes jobs onto a queue and moves on. This mechanism sits at the core of any scalable Laravel application.

Queue Architecture at a Glance

Laravel supports multiple queue backends (Redis, Amazon SQS, database, Beanstalkd) through a single, driver-agnostic API. Jobs are serialized PHP classes that implement the ShouldQueue interface. Workers pull jobs from the queue, deserialize them, and execute their handle() method. Failed jobs land in a dedicated failed_jobs table for retry or inspection.

How Laravel Job Dispatching Works Under the Hood

When dispatch() is called on a job class, Laravel serializes the job instance — including its public properties — and pushes the payload onto the configured queue connection. The serialized payload contains the fully qualified class name, serialized properties, the target queue name, and metadata such as the number of allowed attempts and timeout.
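A simplified payload for a queued job might look like the following sketch. The field names follow Laravel's internal payload format; the UUID and values are purely illustrative:

```json
{
  "uuid": "0f8a2c1e-5b7d-4e3a-9c6f-2d4b8a1e7c55",
  "displayName": "App\\Jobs\\ProcessInvoice",
  "job": "Illuminate\\Queue\\CallQueuedHandler@call",
  "maxTries": 3,
  "backoff": 60,
  "timeout": 120,
  "data": {
    "commandName": "App\\Jobs\\ProcessInvoice",
    "command": "<PHP-serialized job instance>"
  }
}
```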

The queue worker process (php artisan queue:work) runs as a long-lived daemon that polls the queue backend for new jobs. Upon receiving a payload, the worker deserializes the job, resolves its dependencies through the service container, and calls handle().

app/Jobs/ProcessInvoice.php
namespace App\Jobs;

use App\Models\Order;
use App\Services\PdfGenerator;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessInvoice implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 3;
    public int $backoff = 60;
    public int $timeout = 120;

    public function __construct(
        public readonly Order $order
    ) {}

    public function handle(PdfGenerator $pdf): void
    {
        // Generate PDF invoice for the order
        $invoice = $pdf->generate($this->order);

        // Store the generated file
        $this->order->update([
            'invoice_path' => $invoice->path(),
            'invoiced_at' => now(),
        ]);
    }

    public function failed(\Throwable $e): void
    {
        // Notify the ops team when invoice generation fails
        logger()->error('Invoice generation failed', [
            'order_id' => $this->order->id,
            'error' => $e->getMessage(),
        ]);
    }
}

The SerializesModels trait stores only the model's primary key and class name, not the entire Eloquent model. When the worker processes the job, it fetches the fresh model from the database. This avoids stale data and keeps payload sizes small.
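Dispatch-time options are chained on the object that dispatch() returns. A hedged sketch, assuming an $order model in scope and a hypothetical 'invoices' queue name:

```php
use App\Jobs\ProcessInvoice;

// Fluent dispatch options (queue name and delay are illustrative)
ProcessInvoice::dispatch($order)
    ->onQueue('invoices')            // route to a dedicated queue
    ->delay(now()->addMinutes(5))    // hold the job for five minutes
    ->afterCommit();                 // only enqueue once the surrounding DB transaction commits
```

The afterCommit() call matters when a job references a model created inside a transaction: without it, a fast worker can pick up the job before the row is committed.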

Job Batching for Parallel Workloads

Job batching groups multiple jobs into a single batch, tracks their collective progress, and triggers callbacks when all jobs complete — or when any job fails. This pattern fits data imports, bulk notifications, and report generation where multiple independent units of work must complete before a final step runs.

app/Http/Controllers/ImportController.php
use App\Jobs\ImportRow;
use App\Notifications\ImportComplete;
use Illuminate\Bus\Batch;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Bus;
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Notification;

public function import(Request $request)
{
    $rows = $this->parseCSV($request->file('data'));

    // Create a batch of import jobs, one per CSV row
    $batch = Bus::batch(
        collect($rows)->map(fn ($row) => new ImportRow($row))
    )
    ->then(function (Batch $batch) {
        // All jobs completed successfully
        Notification::send(
            auth()->user(),
            new ImportComplete($batch->totalJobs)
        );
    })
    ->catch(function (Batch $batch, \Throwable $e) {
        // First failure in the batch
        logger()->warning('Batch import partial failure', [
            'batch_id' => $batch->id,
            'failed' => $batch->failedJobs,
        ]);
    })
    ->finally(function (Batch $batch) {
        // Runs after all jobs finish (success or failure)
        Cache::forget("import_lock_{$batch->id}");
    })
    ->allowFailures()
    ->dispatch();

    return response()->json(['batch_id' => $batch->id]);
}

Laravel 12 enriches batch payloads with metadata including queue wait time and worker identification. The allowFailures() method prevents a single failing job from cancelling the entire batch — critical for large imports where partial success is acceptable.

Job Chaining for Sequential Workflows

While batching handles parallel workloads, chaining guarantees sequential execution. Each job in the chain runs only after the previous one succeeds. If any job fails, the remaining chain is abandoned and the catch callback fires.

app/Services/OrderWorkflow.php
use App\Jobs\ValidatePayment;
use App\Jobs\ReserveInventory;
use App\Jobs\SendConfirmation;
use App\Jobs\GenerateShippingLabel;
use Illuminate\Support\Facades\Bus;

public function processOrder(Order $order): void
{
    // Each job runs only after the previous one succeeds
    Bus::chain([
        new ValidatePayment($order),
        new ReserveInventory($order),
        new GenerateShippingLabel($order),
        new SendConfirmation($order),
    ])
    ->onQueue('orders')
    ->catch(function (\Throwable $e) use ($order) {
        // Roll back the order if any step fails
        $order->update(['status' => 'failed']);
        logger()->error('Order chain failed', [
            'order_id' => $order->id,
            'step' => $e->getMessage(),
        ]);
    })
    ->dispatch();
}

Chaining excels for domain workflows where step order matters — payment must validate before inventory reserves, and shipping labels depend on confirmed inventory.

Ready to ace your Laravel interviews?

Practice with our interactive simulators, flashcards, and technical tests.

Queue Middleware for Cross-Cutting Concerns

Queue middleware wraps job execution with reusable logic: rate limiting, deduplication, or circuit breaking. Rather than embedding these concerns inside every job, middleware keeps jobs focused on business logic.

app/Jobs/Middleware/RateLimitedJob.php
namespace App\Jobs\Middleware;

use Closure;
use Illuminate\Support\Facades\RateLimiter;

class RateLimitedJob
{
    public function __construct(
        private string $key,
        private int $maxAttempts = 10,
        private int $decaySeconds = 60
    ) {}

    public function handle(object $job, Closure $next): void
    {
        // Release job back to queue if rate limit exceeded
        if (RateLimiter::tooManyAttempts($this->key, $this->maxAttempts)) {
            $job->release($this->decaySeconds);
            return;
        }

        RateLimiter::hit($this->key, $this->decaySeconds);

        $next($job);
    }
}

Apply middleware by defining a middleware() method on the job class:

app/Jobs/CallExternalApi.php
use Illuminate\Queue\Middleware\WithoutOverlapping;

public function middleware(): array
{
    return [
        new RateLimitedJob(
            key: 'external-api',
            maxAttempts: 30,
            decaySeconds: 60
        ),
        // Prevent duplicate jobs from running concurrently
        (new WithoutOverlapping($this->apiResource->id))
            ->releaseAfter(300)
            ->expireAfter(600),
    ];
}

The WithoutOverlapping middleware uses atomic locks to ensure only one instance of a job (identified by a key) runs at a time. Combined with rate limiting, this prevents both duplicate processing and API throttling.

Failed Job Handling and Retry Strategies

Production queue systems need robust failure handling. Laravel stores failed jobs in the failed_jobs table with full payload, exception trace, and the queue/connection that produced the failure. The failed() method on each job class runs after all retry attempts are exhausted.

Configuring retry behavior per job provides fine-grained control:

app/Jobs/SyncExternalData.php
use App\Notifications\SyncFailed;
use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Notification;

class SyncExternalData implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public int $tries = 5;

    // Exponential backoff: 10s, 30s, 60s, 120s, 300s
    public function backoff(): array
    {
        return [10, 30, 60, 120, 300];
    }

    // Job-specific timeout
    public int $timeout = 180;

    // Maximum exceptions before marking as failed
    public int $maxExceptions = 3;

    public function retryUntil(): \DateTime
    {
        // Keep retrying for up to 24 hours
        return now()->addHours(24);
    }

    public function handle(): void
    {
        $response = Http::timeout(30)
            ->retry(2, 1000)
            ->get('https://api.vendor.com/data');

        if ($response->failed()) {
            // Release back to queue with delay for transient failures
            $this->release(60);
            return;
        }

        DataSync::process($response->json());
    }

    public function failed(\Throwable $e): void
    {
        Notification::route('slack', config('services.slack.ops_channel'))
            ->notify(new SyncFailed($e));
    }
}

The distinction between $tries, $maxExceptions, and retryUntil() matters in interviews. $tries counts every attempt including manual releases. $maxExceptions counts only unhandled exceptions. retryUntil() sets a time window regardless of attempt count.

Queue Worker Management and Deployment

Queue workers in production require process supervision, graceful restarts during deployment, and resource management. Supervisor is the standard tool for keeping workers alive.

; /etc/supervisor/conf.d/laravel-worker.conf
[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php /var/www/app/artisan queue:work redis --sleep=3 --tries=3 --max-time=3600
autostart=true
autorestart=true
stopwaitsecs=3600
user=www-data
numprocs=4
redirect_stderr=true
stdout_logfile=/var/log/worker.log
stopasgroup=true
killasgroup=true

Key deployment considerations:

  • Graceful restart: php artisan queue:restart signals workers to finish their current job before restarting. This prevents job corruption during deploys.
  • Max time/jobs: --max-time=3600 and --max-jobs=1000 prevent memory leaks by recycling worker processes periodically.
  • Sleep interval: --sleep=3 controls how long the worker waits before polling an empty queue again. Lower values increase responsiveness but also database/Redis load.
  • Multiple queues: --queue=critical,default,low processes queues in priority order. Workers drain the critical queue before touching default.
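A typical deploy sequence that pairs with the Supervisor configuration above might look like this sketch. The application path and Composer flags are assumptions; adjust them to your environment:

```shell
# Illustrative deploy steps for a Supervisor-managed Laravel app
cd /var/www/app
git pull --ff-only
composer install --no-dev --optimize-autoloader
php artisan migrate --force
php artisan queue:restart   # workers finish their current job, then Supervisor respawns them
```

Running queue:restart last ensures the recycled workers pick up the new code; without it, long-lived daemons keep executing the old, in-memory version of every job class.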

Laravel 12.37 introduced the background queue connection, which defers jobs using Concurrency::defer(). This driver serializes the job and runs it in a separate PHP process, which is useful for lightweight jobs that do not justify a full queue infrastructure.

Unique Jobs and Encrypted Payloads

Two patterns frequently appear in production and interview discussions: ensuring a job runs only once for a given key, and protecting sensitive data in job payloads.

app/Jobs/RebuildSearchIndex.php
use Illuminate\Contracts\Queue\ShouldBeUnique;

class RebuildSearchIndex implements ShouldQueue, ShouldBeUnique
{
    // Lock duration in seconds
    public int $uniqueFor = 3600;

    public function __construct(
        public readonly string $indexName
    ) {}

    // Unique key scopes the lock to this specific index
    public function uniqueId(): string
    {
        return $this->indexName;
    }

    public function handle(): void
    {
        SearchIndex::rebuild($this->indexName);
    }
}

For jobs carrying sensitive data (user credentials, payment tokens), the ShouldBeEncrypted interface encrypts the entire serialized payload at rest:

app/Jobs/ProcessPayment.php
use Illuminate\Contracts\Queue\ShouldBeEncrypted;

class ProcessPayment implements ShouldQueue, ShouldBeEncrypted
{
    public function __construct(
        private string $paymentToken,
        private float $amount
    ) {}

    public function handle(PaymentGateway $gateway): void
    {
        $gateway->charge($this->paymentToken, $this->amount);
    }
}

The payload is encrypted with the application key before being stored in Redis or the database. Workers decrypt it automatically before deserialization.

Common Interview Questions on Laravel Queues

Technical interviews frequently test queue architecture understanding beyond surface-level API knowledge.

What happens when a queued job references a deleted Eloquent model? With SerializesModels, the worker attempts to fetch the model by ID when processing the job. If the model no longer exists, Laravel throws a ModelNotFoundException. To handle this gracefully, set the $deleteWhenMissingModels property to true: the job is then silently deleted from the queue instead of failing.
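A minimal sketch of that property on a job class (GenerateReceipt is a hypothetical job, not one from this article):

```php
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\SerializesModels;

class GenerateReceipt implements ShouldQueue   // hypothetical job
{
    use SerializesModels;

    // Quietly delete the job if its model was removed before processing
    public bool $deleteWhenMissingModels = true;
}
```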

How does ShouldBeUnique differ from WithoutOverlapping middleware? ShouldBeUnique prevents a job from being dispatched if one with the same unique key already exists in the queue. WithoutOverlapping allows dispatch but prevents concurrent execution — if a job with the same key is already running, the new instance is released back to the queue. They solve different problems and can be combined.
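The two mechanisms can be combined on one job. A hedged sketch with a hypothetical SyncAccount job keyed on an account ID:

```php
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Queue\Middleware\WithoutOverlapping;

class SyncAccount implements ShouldQueue, ShouldBeUnique   // hypothetical job
{
    public function __construct(
        public readonly int $accountId
    ) {}

    // Dispatch-time dedup: skip dispatch if an identical job is already queued
    public function uniqueId(): string
    {
        return (string) $this->accountId;
    }

    // Execution-time lock: never run two syncs for the same account concurrently
    public function middleware(): array
    {
        return [new WithoutOverlapping($this->accountId)];
    }

    public function handle(): void
    {
        // Sync logic for the account
    }
}
```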

When should retryUntil() be preferred over $tries? Use retryUntil() for jobs that interact with external services where recovery time is unpredictable. A fixed retry count ($tries = 3) may exhaust attempts during a brief outage. retryUntil() sets a time window (e.g., 24 hours) and keeps retrying with backoff until the service recovers or the window expires.

How do queue priorities work with multiple queues? Running queue:work --queue=critical,default,low creates a priority system. The worker fully drains the critical queue before checking default, and default before low. This means low-priority jobs may starve during peak load. For strict SLAs, dedicated workers per queue provide better guarantees.

Start practicing!

Test your knowledge with our interview simulators and technical tests.

Conclusion

  • Laravel queues abstract the queue backend behind a driver-agnostic API supporting Redis, SQS, database, and the new background connection in Laravel 12.37
  • Job batching handles parallel workloads with collective progress tracking, while chaining enforces sequential execution for domain workflows
  • Queue middleware (rate limiting, WithoutOverlapping) keeps cross-cutting concerns out of job business logic
  • Failed job handling combines $tries, $maxExceptions, retryUntil(), and exponential backoff for resilient retry strategies
  • ShouldBeUnique prevents duplicate dispatch; ShouldBeEncrypted protects sensitive payloads at rest
  • Production workers require Supervisor, graceful restarts during deploys, and memory management via --max-time and --max-jobs
  • Interview preparation should cover the distinction between dispatch-time uniqueness and execution-time locking, model serialization behavior, and queue priority starvation

For hands-on practice with Laravel interview questions, the SharpSkill question bank covers queues, middleware, and Eloquent patterns with detailed explanations.


Tags

#laravel
#queues
#jobs
#php
#async
#architecture
