When your handler throws, call msg.nack(). AsyncBase schedules a retry with exponential backoff. After N nacks (default 3), the message lands in the queue's Dead Letter Queue (DLQ).
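The handler-side flow looks roughly like this. This is a sketch: the Message interface and sendEmail are illustrative stand-ins, not the exact SDK types.

```typescript
// Illustrative message shape -- the real SDK binds ack/nack for you.
interface Message {
  id: string;
  payload: unknown;
  ack(): Promise<void>;  // success: delete the message
  nack(): Promise<void>; // failure: schedule a retry with backoff
}

// Hypothetical business logic that may throw.
async function sendEmail(payload: unknown): Promise<void> {
  if (payload == null) throw new Error("empty payload");
}

async function handle(msg: Message): Promise<void> {
  try {
    await sendEmail(msg.payload);
    await msg.ack();
  } catch {
    // After the default 3 nacks the message moves to the DLQ.
    await msg.nack();
  }
}
```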

Backoff formula

backoff_ms = 2^attempt × 1000

attempt=1 → 2s
attempt=2 → 4s
attempt=3 → 8s
attempt=4 → 16s
attempt=5 → 32s
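The schedule above is just powers of two in milliseconds. A one-line helper reproduces it (a sketch for clarity, not an SDK export):

```typescript
// backoff_ms = 2^attempt * 1000, where attempt starts at 1.
function backoffMs(attempt: number): number {
  return 2 ** attempt * 1000;
}
```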
Default max attempts: 3. Override per-message:
await q.send("emails", payload, { retries: 10 })

Inspect the DLQ

curl https://api.asyncbase.dev/v1/queues/emails/dlq?limit=50 \
  -H "Authorization: Bearer $ASYNCBASE_KEY"
{
  "queue": "emails",
  "total": 12,
  "returned": 12,
  "messages": [
    {
      "msg_id": "msg_abc",
      "payload": "{...}",
      "original_enqueued_at": "...",
      "delivery_count": 3,
      "moved_to_dlq_at": "...",
      "group": "workers",
      "reason": "max_retries_exceeded"
    }
  ]
}
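When triaging a large DLQ, grouping the response by reason is a quick first cut. Here is a small helper over the JSON shape shown above; the DlqMessage interface mirrors the response fields and is not an SDK type.

```typescript
// Mirrors the fields of one entry in the DLQ response's "messages" array.
interface DlqMessage {
  msg_id: string;
  payload: string;
  delivery_count: number;
  group: string;
  reason: string;
}

// Count DLQ messages per failure reason.
function countByReason(messages: DlqMessage[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const m of messages) {
    counts.set(m.reason, (counts.get(m.reason) ?? 0) + 1);
  }
  return counts;
}
```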

Redrive

curl -X POST https://api.asyncbase.dev/v1/queues/emails/dlq/redrive \
  -H "Authorization: Bearer $ASYNCBASE_KEY" \
  -H "Content-Type: application/json" \
  -d '{"batch": 100}'
The default batch is 100; the maximum is 1000. Calling redrive repeatedly is safe, but each call pulls the NEXT batch, not the same one.
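Because each call pulls the next batch, draining a large DLQ is a loop until a call comes back short. A sketch with the HTTP call injected: redriveBatch stands in for the POST .../dlq/redrive request, and the assumption that it returns the number of messages moved is ours, since the response body isn't specified here.

```typescript
// Repeatedly redrive until a batch comes back short of the requested size.
// redriveBatch stands in for POST /v1/queues/:name/dlq/redrive and is
// assumed to resolve to the number of messages it moved.
async function drainDlq(
  redriveBatch: (batch: number) => Promise<number>,
  batch = 100,
): Promise<number> {
  let total = 0;
  for (;;) {
    const moved = await redriveBatch(batch);
    total += moved;
    if (moved < batch) return total; // last (possibly partial) batch
  }
}
```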

Alerts on DLQ threshold

Set up alerts from /settings/alerts or via the API:
curl -X POST https://api.asyncbase.dev/v1/alerts \
  -H "Authorization: Bearer $ASYNCBASE_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "dlq_threshold",
    "queue_name": "emails",
    "threshold": 100,
    "webhook_url": "https://hooks.slack.com/...",
    "webhook_kind": "slack"
  }'
Alerts are debounced to one firing per hour per rule. Omit queue_name to watch all queues.

Best practices

The nack endpoint expects the message's current attempt value (attempt=<current value>). All SDKs pass it automatically via the bound msg.nack().
At-least-once delivery means you WILL see the same message twice under load. Key your side effects by msg.id or a business key.
If the DLQ grows silently, users churn before you notice.
First fix whatever broke, then redrive. Otherwise the same 100 messages bounce back into the DLQ.
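For the at-least-once point above, the simplest guard is a processed-set keyed by msg.id. This in-memory sketch illustrates the idea; in production you would back it with your database or a unique constraint, since an in-process Set does not survive restarts or multiple workers.

```typescript
// Ids of messages whose side effects have already run (in-memory only).
const processed = new Set<string>();

// Returns true if this delivery should run side effects,
// false if the message id was already handled.
function firstDelivery(msgId: string): boolean {
  if (processed.has(msgId)) return false;
  processed.add(msgId);
  return true;
}
```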