Retry with Exponential Backoff

Applicability

When to Use

- When transient network errors are common
- When APIs have rate limits
- When eventual success is acceptable
Overview

How It Works

This pattern wraps MCP server calls with automatic retry logic. When a call fails with a retryable error (network timeout, 429 rate limit, 503 service unavailable), the client waits for an exponentially increasing delay before retrying. Non-retryable errors (400 bad request, 404 not found) fail immediately. Exponential backoff prevents thundering-herd problems, where many clients retry at the same moment; adding jitter (a random variation to each delay) spreads the retries out further.
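
As a rough illustration of the delay math (not part of the implementation below), this sketch compares the additive jitter used in the code example with the alternative "full jitter" strategy, where the entire delay is drawn at random. The constants and function names are illustrative only.

```typescript
// Illustrative delay schedules only; constants chosen to match the example below.
const BASE = 1000;  // 1s base delay
const CAP = 30000;  // 30s maximum delay

// Additive jitter: exponential backoff plus up to 1s of random offset.
const additiveJitter = (attempt: number): number =>
  Math.min(BASE * Math.pow(2, attempt) + Math.random() * 1000, CAP);

// Full jitter: draw the whole delay uniformly from [0, capped backoff].
const fullJitter = (attempt: number): number =>
  Math.random() * Math.min(BASE * Math.pow(2, attempt), CAP);

for (let attempt = 0; attempt < 5; attempt++) {
  console.log(`attempt ${attempt}: additive ≈ ${Math.round(additiveJitter(attempt))}ms, full ≈ ${Math.round(fullJitter(attempt))}ms`);
}
```

Full jitter spreads retries more aggressively at the cost of occasionally very short delays; the code example below keeps the additive variant described above.
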
Implementation

Code Example

```typescript
interface RetryOptions {
  maxRetries?: number;
  baseDelay?: number;
  maxDelay?: number;
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function withRetry<T>(
  fn: () => Promise<T>,
  { maxRetries = 3, baseDelay = 1000, maxDelay = 30000 }: RetryOptions = {}
): Promise<T> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Give up on the last attempt or on errors that retrying cannot fix.
      if (attempt === maxRetries || !isRetryable(error)) throw error;
      // Exponential backoff with up to 1s of additive jitter, capped at maxDelay.
      const delay = Math.min(baseDelay * Math.pow(2, attempt) + Math.random() * 1000, maxDelay);
      console.log(`Retry ${attempt + 1}/${maxRetries} in ${Math.round(delay)}ms: ${(error as Error).message}`);
      await sleep(delay);
    }
  }
  throw new Error("unreachable: the final attempt either returns or rethrows");
}

function isRetryable(error: unknown): boolean {
  const retryableCodes = [429, 500, 502, 503, 504];
  const e = error as { code?: string; status?: number };
  return e.code === "ECONNRESET" || e.code === "ETIMEDOUT" || (e.status !== undefined && retryableCodes.includes(e.status));
}

// Usage (postgres is assumed to be an existing MCP-backed client)
const data = await withRetry(() => postgres.query("SELECT * FROM large_table"));
```
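
If the wrapped call reports HTTP failures through a resolved response object rather than a thrown error (as fetch does), the caller has to convert retryable statuses into thrown errors so that isRetryable can inspect them. A minimal sketch, assuming a placeholder URL and the withRetry/isRetryable definitions above:

```typescript
// Sketch: surface HTTP status codes as thrown errors so isRetryable can see them.
async function fetchJson(url: string): Promise<unknown> {
  const res = await fetch(url);
  if (!res.ok) {
    const err = new Error(`HTTP ${res.status}`) as Error & { status: number };
    err.status = res.status; // picked up by isRetryable
    throw err;
  }
  return res.json();
}

// Placeholder endpoint for illustration only.
const report = await withRetry(() => fetchJson("https://api.example.com/reports/latest"), {
  maxRetries: 5,
  baseDelay: 500,
});
```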

Quick Info

Category: resilience
Complexity: Easy
