
Idempotency Keys: Stop Creating Duplicate Orders When Users Double-Click the Payment Button 🔑💳

12 min read


True story: our e-commerce backend processed 34 duplicate orders on a single Black Friday morning. Same user, same cart, same payment method — charged twice. Some users caught it. Some didn't. Our support queue was a warzone.

The root cause? A 3-second API timeout combined with an anxious user who clicked "Complete Purchase" a second time. The first request was still in-flight, processing the payment. The second request came in, saw no order in progress, and happily started a new one.

Two charges. One order. One very angry customer email.

The fix is called idempotency. It's a fancy word for a simple idea, and it's one of those architectural concepts that seems obvious in hindsight and infuriatingly opaque before you've needed it.

What's Idempotency? 🤔

An operation is idempotent if doing it multiple times produces the same result as doing it once.

Idempotent:    DELETE /orders/123  → deletes order 123
               DELETE /orders/123  → order already gone, same result ✅

Not idempotent: POST /orders/checkout → creates new order #456
                POST /orders/checkout → creates ANOTHER new order #457 💀

HTTP already got this partly right. GET, PUT, and DELETE are meant to be idempotent by design. POST is not — that's why reloading a page you reached by submitting a form triggers the browser's "Confirm form resubmission?" dialog.

The problem is that in production, we make POST requests look like they're idempotent... until they're not. Retries, double-clicks, network hiccups, and impatient users all trigger duplicate POSTs. And if your backend isn't designed for it, you get duplicate orders, double charges, and "I got charged twice" tickets.
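To make the difference concrete, here's a toy sketch in plain JavaScript (the `orders` map and function names are invented for illustration): repeating the delete leaves the system in the same state, while repeating the create does not.

```javascript
// Toy model: the same call applied twice, starting from one existing order.
const orders = new Map([[123, { id: 123 }]]);

// Idempotent: deleting order 123 twice leaves the system in the same state.
function deleteOrder(id) {
  orders.delete(id); // no-op if already gone
  return orders.size;
}

// NOT idempotent: every call mints a brand-new order.
let nextId = 456;
function createOrder(cartId) {
  const id = nextId++;
  orders.set(id, { id, cartId });
  return id;
}

deleteOrder(123); // returns 0: order gone
deleteOrder(123); // returns 0 again: same state, same result
createOrder(42);  // returns 456
createOrder(42);  // returns 457, a duplicate order!
```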

The Idempotency Key Pattern 🏗️

The solution is simple: the client sends a unique key with every mutating request. If the server sees the same key twice, it returns the original response instead of executing again.

First request:
  POST /checkout
  Idempotency-Key: a7f3d2c1-8b4e-4f9a-b2d1-c6e0f3a2b5d8
  Body: { cart_id: 42, payment_token: "tok_abc123" }
  → Server processes. Charges card. Creates order #456. Stores result.
  → Response: 201 { order_id: 456, total: 89.99 }

Second request (user double-clicked, or client retried):
  POST /checkout
  Idempotency-Key: a7f3d2c1-8b4e-4f9a-b2d1-c6e0f3a2b5d8  ← same key!
  Body: { cart_id: 42, payment_token: "tok_abc123" }
  → Server looks up key. Finds stored result. Returns ORIGINAL response.
  → Response: 201 { order_id: 456, total: 89.99 }  ← same order, no new charge ✅

One charge. One order. One idempotency key to rule them all.

The key is generated by the client (usually a UUID v4), not the server. The server's job is to remember what it did when it first saw that key and replay the response forever after.
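Stripped of HTTP, Redis, and locking, the core of the pattern is just a lookup table from key to stored response. A minimal in-memory sketch (function and variable names are mine, not from our codebase):

```javascript
// Minimal in-memory version of the server side: key → stored response.
const seen = new Map();

let orderSeq = 455;
function processCheckout(cartId) {
  // The expensive, non-idempotent work (charge the card, create the order)
  orderSeq += 1;
  return { order_id: orderSeq, cart_id: cartId };
}

function checkout(idempotencyKey, cartId) {
  // Replay: same key means return the original response, skip processing
  if (seen.has(idempotencyKey)) return seen.get(idempotencyKey);

  const response = processCheckout(cartId);
  seen.set(idempotencyKey, response); // remember what we did for this key
  return response;
}

const first = checkout('uuid-abc123', 42); // processes: order 456
const retry = checkout('uuid-abc123', 42); // replays the SAME response
const other = checkout('uuid-def456', 42); // new key, new intent: order 457
```

Everything that follows (locks, TTLs, payload hashes) is hardening around this one lookup.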

The Architecture: What Lives Where 🏗️

Client                  API Server              Database
  │                          │                        │
  │  POST /checkout          │                        │
  │  Key: uuid-abc123 ─────► │                        │
  │                          │  Lookup key in         │
  │                          │  idempotency store ──► │
  │                          │                        │  Not found → proceed
  │                          │ ◄──────────────────────│
  │                          │  BEGIN transaction     │
  │                          │  - charge payment      │
  │                          │  - create order ─────► │
  │                          │  - store result ─────► │  (key + response)
  │                          │  COMMIT                │
  │ ◄── 201 { order: 456 } ──│                        │
  │                          │                        │
  │  (user clicks again)     │                        │
  │  POST /checkout          │                        │
  │  Key: uuid-abc123 ─────► │                        │
  │                          │  Lookup key in         │
  │                          │  idempotency store ──► │
  │                          │                        │  Found! Return cached
  │                          │ ◄──────────────────────│  response
  │ ◄── 201 { order: 456 } ──│  (no processing)       │
  │     (same response!)     │                        │

The idempotency store can be Redis (fast, ephemeral), your main database (simple, durable), or a dedicated table. We use Redis with a 24-hour TTL — long enough to catch any retry scenario, short enough to keep memory manageable.

Real Code: Laravel Implementation 🐘

When designing our e-commerce backend, I built idempotency as middleware so every checkout route gets it automatically:

// app/Http/Middleware/IdempotencyMiddleware.php
namespace App\Http\Middleware;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;
use Closure;

class IdempotencyMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        // Only apply to mutating requests
        if (!in_array($request->method(), ['POST', 'PUT', 'PATCH'])) {
            return $next($request);
        }

        $key = $request->header('Idempotency-Key');

        // If no key provided, proceed normally (optional enforcement)
        if (!$key) {
            return $next($request);
        }

        // Validate key format (UUID v4)
        if (!preg_match('/^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i', $key)) {
            return response()->json(['error' => 'Invalid Idempotency-Key format'], 400);
        }

        $cacheKey = "idempotency:{$key}";

        // Check for existing result
        $cached = Cache::get($cacheKey);
        if ($cached !== null) {
            // Return the original response — no re-processing
            return response()->json(
                $cached['body'],
                $cached['status'],
                ['X-Idempotent-Replayed' => 'true']
            );
        }

        // Mark as "in progress" to handle concurrent duplicate requests
        $inProgress = Cache::add("idempotency:lock:{$key}", true, 30);
        if (!$inProgress) {
            return response()->json(
                ['error' => 'Request with this key is already being processed'],
                409
            );
        }

        // Process the actual request
        $response = $next($request);

        // Store the result (24 hour TTL), but never cache 5xx:
        // server errors may be transient, and the client should be
        // able to retry once the upstream service recovers
        if ($response->getStatusCode() < 500) {
            Cache::put($cacheKey, [
                'body'   => $response->getData(true), // assumes a JsonResponse
                'status' => $response->getStatusCode(),
            ], now()->addHours(24));
        }

        // Release the lock so a later retry can hit the cache (or re-process)
        Cache::forget("idempotency:lock:{$key}");

        return $response;
    }
}

// Apply to sensitive routes (register the 'idempotency' alias in
// bootstrap/app.php or Kernel.php first)
Route::middleware(['idempotency'])->group(function () {
    Route::post('/checkout', [CheckoutController::class, 'process']);
    Route::post('/payments/charge', [PaymentController::class, 'charge']);
    Route::post('/subscriptions', [SubscriptionController::class, 'create']);
});

The X-Idempotent-Replayed: true header is a nice touch — clients can detect they got a cached response and skip any "order created" analytics events to avoid double-counting.

Node.js Version ⚡

const Redis = require('ioredis');
const { validate: uuidValidate } = require('uuid');

const redisClient = new Redis(process.env.REDIS_URL);
const IDEMPOTENCY_TTL = 24 * 60 * 60; // 24 hours in seconds

async function idempotencyMiddleware(req, res, next) {
    const idempotencyKey = req.headers['idempotency-key'];

    if (!idempotencyKey || !['POST', 'PUT', 'PATCH'].includes(req.method)) {
        return next();
    }

    if (!uuidValidate(idempotencyKey)) {
        return res.status(400).json({ error: 'Invalid Idempotency-Key format' });
    }

    const cacheKey = `idempotency:${idempotencyKey}`;
    const lockKey  = `idempotency:lock:${idempotencyKey}`;

    // Check for existing result
    const cached = await redisClient.get(cacheKey);
    if (cached) {
        const { body, statusCode } = JSON.parse(cached);
        res.set('X-Idempotent-Replayed', 'true');
        return res.status(statusCode).json(body);
    }

    // Acquire distributed lock (NX = only set if not exists)
    const lockAcquired = await redisClient.set(lockKey, '1', 'EX', 30, 'NX');
    if (!lockAcquired) {
        return res.status(409).json({
            error: 'A request with this idempotency key is already in flight'
        });
    }

    // Intercept the response to cache it
    const originalJson = res.json.bind(res);
    res.json = async (body) => {
        if (res.statusCode < 500) {
            // Only cache successful/client-error responses, not server errors
            // (server errors might be transient — let the client retry)
            await redisClient.setex(
                cacheKey,
                IDEMPOTENCY_TTL,
                JSON.stringify({ body, statusCode: res.statusCode })
            );
        }
        await redisClient.del(lockKey);
        return originalJson(body);
    };

    next();
}

The nuance that bit us: don't cache 5xx responses. If your payment processor is temporarily down and returns a 500, you don't want that failure cached for 24 hours. The client should be able to retry and hit the real service once it recovers.
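If you want that rule in one place, a small predicate (the name is mine, hypothetical) keeps the policy testable and out of the middleware body:

```javascript
// Cache policy in one place: replay only deterministic outcomes.
// 2xx (success) and 4xx (the request itself is bad) are safe to replay;
// 5xx may be transient, so let the client's retry reach the real service.
function shouldCacheResponse(statusCode) {
  return statusCode < 500;
}

shouldCacheResponse(201); // true: order created, replay it
shouldCacheResponse(422); // true: a bad payload will stay bad
shouldCacheResponse(503); // false: processor down, let retries through
```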

The Concurrent Request Problem 🔒

Here's a subtle race condition that trips up most first implementations:

T=0ms: Request A arrives (key: abc123) → not in cache → starts processing
T=5ms: Request B arrives (key: abc123) → not in cache yet! → ALSO starts processing
T=500ms: Request A finishes → charges $89.99 → stores result
T=505ms: Request B finishes → charges $89.99 → OVERWRITES result

Result: Two charges, customer never knew. The cache entry looks "correct"
        because both requests stored the same-looking response.

The distributed lock (NX flag in Redis) solves this. Request B hits the lock and gets a 409. The client retries after a short delay, now finds the cached result from Request A, and returns it. No double charge.

Without lock:               With lock:
T=0ms: A starts ──────►     T=0ms:  A starts, acquires lock ──►
T=5ms: B starts ──────►     T=5ms:  B arrives, lock exists → 409
T=500ms: A charges card     T=500ms: A charges card, stores result
T=505ms: B ALSO charges!    T=510ms: B retries → hits cache → returns A's result ✅
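The lock itself is nothing more than "set if not exists". Here's an in-memory stand-in for Redis's NX semantics (illustrative only: a real deployment needs Redis so the lock is shared across server processes, and the real lock also carries a TTL so it can't be held forever):

```javascript
// An in-memory stand-in for Redis SET key value NX.
const locks = new Set();

// Returns true only for the first caller; everyone else is told "busy".
function acquireLock(key) {
  if (locks.has(key)) return false; // lock held: this request gets a 409
  locks.add(key);
  return true;
}

function releaseLock(key) {
  locks.delete(key);
}

const a = acquireLock('idempotency:lock:abc123'); // true: request A proceeds
const b = acquireLock('idempotency:lock:abc123'); // false: request B gets a 409
releaseLock('idempotency:lock:abc123');
const c = acquireLock('idempotency:lock:abc123'); // true again after release
```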

Where to Store Idempotency Keys 🗄️

Option 1: Redis (what we use)

✅ Fast (sub-millisecond lookup)
✅ TTL built-in (auto-cleanup after 24h)
✅ Atomic NX for distributed locking
❌ Ephemeral — Redis restart loses keys
❌ Another dependency to manage

Option 2: Database table

CREATE TABLE idempotency_keys (
    key         VARCHAR(36) PRIMARY KEY,
    user_id     BIGINT NOT NULL,
    endpoint    VARCHAR(255) NOT NULL,
    response    JSON,
    status_code SMALLINT,
    created_at  TIMESTAMP DEFAULT NOW(),
    expires_at  TIMESTAMP NOT NULL
);

-- Postgres wants indexes outside CREATE TABLE (MySQL allows them inline)
CREATE INDEX idx_expires_at ON idempotency_keys (expires_at);  -- for cleanup jobs
✅ Durable (survives Redis restart)
✅ Can enforce per-user key uniqueness
✅ Auditable — see every idempotency replay
❌ Slower (DB query on every POST)
❌ Table grows — needs cleanup job

A scalability lesson that cost us a painful incident: we started with just Redis. During a Redis failover, keys were lost mid-checkout. Users who retried during the 90-second recovery window got duplicate charges because the idempotency check came back empty. We now write critical idempotency records to Postgres with Redis as a cache in front.
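The layering we ended up with is a classic read-through cache. Sketched here with two Maps standing in for Redis and Postgres (the real version uses ioredis and a table like the one above; names are illustrative):

```javascript
// Two layers: a fast ephemeral cache in front of a durable store.
const redisLike    = new Map(); // stand-in for Redis (fast, can lose data)
const postgresLike = new Map(); // stand-in for Postgres (source of truth)

function storeResult(key, response) {
  postgresLike.set(key, response); // durable record first
  redisLike.set(key, response);    // then warm the cache
}

function lookupResult(key) {
  if (redisLike.has(key)) return redisLike.get(key); // fast path

  // Cache miss (e.g. after a Redis failover): fall back to the durable
  // store and re-warm the cache so the next lookup is fast again.
  const durable = postgresLike.get(key);
  if (durable !== undefined) redisLike.set(key, durable);
  return durable;
}

storeResult('abc123', { order_id: 456 });
redisLike.clear(); // simulate the failover that lost our keys
const recovered = lookupResult('abc123'); // still found: no duplicate charge
```

The write order matters: durable store first, cache second, so a crash between the two can only lose the cache copy, never the record.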

The Client Side Matters Too 🖥️

Idempotency keys only work if the client generates and reuses them correctly:

// āŒ Wrong: new key on every retry — defeats the entire purpose
async function checkout(cart) {
    for (let attempt = 0; attempt < 3; attempt++) {
        const response = await fetch('/checkout', {
            method: 'POST',
            headers: {
                'Idempotency-Key': crypto.randomUUID(), // new key each time!
                'Content-Type': 'application/json',
            },
            body: JSON.stringify(cart),
        });
        if (response.ok) return response.json();
    }
}

// ✅ Correct: generate key ONCE, reuse on all retries
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function checkout(cart) {
    const idempotencyKey = crypto.randomUUID(); // generated ONCE before the loop

    for (let attempt = 0; attempt < 3; attempt++) {
        try {
            const response = await fetch('/checkout', {
                method: 'POST',
                headers: {
                    'Idempotency-Key': idempotencyKey, // same key every retry
                    'Content-Type': 'application/json',
                },
                body: JSON.stringify(cart),
            });
            if (response.ok || response.status === 409) return response.json();
            if (response.status >= 500) {
                await sleep(1000 * 2 ** attempt); // exponential backoff: 1s, 2s, 4s
                continue;
            }
            return response.json(); // 4xx — don't retry
        } catch (networkError) {
            await sleep(1000 * 2 ** attempt);
        }
    }
}

The key must be tied to a specific intent, not a specific attempt. "I want to check out cart #42 right now" gets one key. If the user goes back, modifies the cart, and tries again — that's a new intent, generate a new key.

Common Mistakes I Made ❌

Mistake #1: Scoping keys globally instead of per-user

// āŒ User A sends key "abc123" → result cached globally
// User B guesses or observes key "abc123" → gets User A's order data!
$cacheKey = "idempotency:{$key}";

// āœ… Always scope by user
$cacheKey = "idempotency:{$userId}:{$key}";

Mistake #2: Caching 5xx responses

A timeout from your payment processor is a server error. Cache that response, and the next retry will get "payment failed" even after the processor recovered. Only cache 2xx and 4xx.

Mistake #3: Not validating that the key matches the request body

If someone reuses the same key with a different request body, should you return the original response or reject it? Stripe rejects it (returns 422 "Idempotency-Key is already used"). We do the same — a key is tied to one specific payload hash.

// Store payload hash alongside the response
$payloadHash = hash('sha256', $request->getContent());
Cache::put($cacheKey, [
    'body'         => $response->getData(true),
    'status'       => $response->getStatusCode(),
    'payload_hash' => $payloadHash,
], now()->addHours(24));

// On replay, verify it's the same payload
if ($cached['payload_hash'] !== hash('sha256', $request->getContent())) {
    return response()->json(
        ['error' => 'Idempotency-Key already used for a different request body'],
        422
    );
}

When Does Idempotency Matter Most? ⚖️

Operation              Idempotency Needed?   Why
Payment / charge       Critical              Money. Need I say more?
Order creation         Critical              Duplicate orders = refund nightmare
Email sends            High                  Getting the same email 3x is annoying
Inventory reservation  High                  Double-reserving = overselling
User registration      Medium                Usually caught by unique constraints
Analytics events       Low                   Duplicate events are mostly harmless
GET requests           None                  Already idempotent by nature

TL;DR 🎯

The Problem:
  User double-clicks → two POST requests → two charges → angry customer
  Network retry → same POST twice → duplicate order → audit nightmare

The Solution (Idempotency Keys):
  Client generates UUID once per "intent"
  Client sends UUID as Idempotency-Key header on every attempt
  Server checks: have I seen this key before?
    → Yes: return cached response, skip processing ✅
    → No: process normally, store result with key
  Double-clicks and retries become harmless

Implementation checklist:
  ✅ Generate key on client (UUID v4), reuse across retries
  ✅ Server checks cache before processing
  ✅ Use distributed lock for concurrent duplicate requests (Redis NX)
  ✅ Cache 2xx and 4xx, never 5xx (5xx may be transient)
  ✅ Scope keys per user (not globally)
  ✅ Validate payload hash on replay (optional but safe)
  ✅ TTL of 24h covers all realistic retry windows

What idempotency is NOT:
  ✗ A substitute for transactions (still use those inside the handler)
  ✗ A deduplication system (different users can have same cart, different keys)
  ✗ A replacement for proper error handling

As a Technical Lead, I've learned that distributed systems have three certainties: death, taxes, and network retries. You can't stop users from double-clicking or clients from retrying on timeout. What you can control is whether those retries cause duplicate charges or just return a politely cached "yes, we already did that."

34 duplicate orders on Black Friday taught me that lesson better than any architecture book ever could.


Built idempotency into an API and have war stories? Find me on LinkedIn — especially if you have a story about the client team generating a new UUID on every retry. Those are my favorite 2 AM debugging sessions.

Want to see the full middleware implementation including payload hashing and audit logs? Check out GitHub.

Make your APIs make peace with retries. 🔑


P.S. Stripe's idempotency key docs are genuinely the best in the industry. If you want to see idempotency done right at scale, read them — then copy shamelessly. We did. ✍