# Idempotency Keys: Stop Creating Duplicate Orders When Users Double-Click the Payment Button
True story: our e-commerce backend processed 34 duplicate orders on a single Black Friday morning. Same user, same cart, same payment method, charged twice. Some users caught it. Some didn't. Our support queue was a war zone.
The root cause? A 3-second API timeout combined with an anxious user who clicked "Complete Purchase" a second time. The first request was still in-flight, processing the payment. The second request came in, saw no order in progress, and happily started a new one.
Two charges. One order. One very angry customer email.
The fix is called idempotency. It's a fancy word for a simple idea, and it's one of those architectural concepts that seems obvious in hindsight and infuriatingly opaque before you've needed it.
## What's Idempotency?
An operation is idempotent if doing it multiple times produces the same result as doing it once.
```
Idempotent:      DELETE /orders/123     → deletes order 123
                 DELETE /orders/123     → order already gone, same result ✅

Not idempotent:  POST /orders/checkout  → creates new order #456
                 POST /orders/checkout  → creates ANOTHER new order #457 ❌
```
HTTP already got this partly right. GET, PUT, and DELETE are meant to be idempotent by design. POST is not ā that's why hitting back/forward on a form gives you that "Resubmit?" dialog.
The problem is that in production, we make POST requests look like they're idempotent... until they're not. Retries, double-clicks, network hiccups, and impatient users all trigger duplicate POSTs. And if your backend isn't designed for it, you get duplicate orders, double charges, and "I got charged twice" tickets.
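The definition is easiest to see with plain values, no HTTP involved. A tiny sketch (all names here are illustrative): setting a field to a fixed value is idempotent, while appending to a list is not.

```javascript
// Idempotent: setting a field — applying it twice changes nothing further.
const cancelOrder = (order) => ({ ...order, status: 'cancelled' });

// Not idempotent: appending — every application adds another entry.
const addToCart = (cart, item) => [...cart, item];

const once = cancelOrder({ id: 123, status: 'paid' });
const twice = cancelOrder(once); // same result as applying it once

const cart1 = addToCart([], 'book');
const cart2 = addToCart(cart1, 'book'); // a duplicate sneaks in
```

A duplicate POST is `addToCart` run twice; the idempotency key pattern below effectively turns it into `cancelOrder`.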
## The Idempotency Key Pattern
The solution is simple: the client sends a unique key with every mutating request. If the server sees the same key twice, it returns the original response instead of executing again.
First request:

```
POST /checkout
Idempotency-Key: a7f3d2c1-8b4e-4f9a-b2d1-c6e0f3a2b5d8
Body: { cart_id: 42, payment_token: "tok_abc123" }

→ Server processes. Charges card. Creates order #456. Stores result.
→ Response: 201 { order_id: 456, total: 89.99 }
```

Second request (user double-clicked, or client retried):

```
POST /checkout
Idempotency-Key: a7f3d2c1-8b4e-4f9a-b2d1-c6e0f3a2b5d8   ← same key!
Body: { cart_id: 42, payment_token: "tok_abc123" }

→ Server looks up key. Finds stored result. Returns ORIGINAL response.
→ Response: 201 { order_id: 456, total: 89.99 }   ← same order, no new charge ✅
```
One charge. One order. One idempotency key to rule them all.
The key is generated by the client (usually a UUID v4), not the server. The server's job is to remember what it did when it first saw that key and replay the response forever after.
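The server-side rule fits in a few lines. A minimal in-memory sketch, with a `Map` standing in for the real store and `processFn` standing in for "charge the card, create the order" (names are illustrative, not our production code):

```javascript
const idempotencyStore = new Map();

function handleWithIdempotency(key, processFn) {
  if (idempotencyStore.has(key)) {
    return idempotencyStore.get(key); // replay the original result
  }
  const result = processFn();         // first time only: do the real work
  idempotencyStore.set(key, result);
  return result;
}

// Simulate a double-click: same key twice, the handler runs once.
let charges = 0;
const processCheckout = () => {
  charges += 1; // stands in for charging the card
  return { order_id: 456, total: 89.99 };
};

const first = handleWithIdempotency('uuid-abc123', processCheckout);
const second = handleWithIdempotency('uuid-abc123', processCheckout);
// charges is 1; first and second are the same response
```

Everything that follows (Redis, locks, TTLs) is this function made safe for concurrent, distributed, crash-prone reality.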
## The Architecture: What Lives Where
```
Client                    API Server                        Database
  │                           │                                │
  │  POST /checkout           │                                │
  │  Key: uuid-abc123  ──────▶│                                │
  │                           │  Lookup key in                 │
  │                           │  idempotency store  ──────────▶│
  │                           │◀─── Not found → proceed        │
  │                           │                                │
  │                           │  BEGIN transaction             │
  │                           │   - charge payment             │
  │                           │   - create order   ───────────▶│
  │                           │   - store result   ───────────▶│  (key + response)
  │                           │  COMMIT                        │
  │◀── 201 { order: 456 } ────│                                │
  │                           │                                │
  │  (user clicks again)      │                                │
  │  POST /checkout           │                                │
  │  Key: uuid-abc123  ──────▶│                                │
  │                           │  Lookup key in                 │
  │                           │  idempotency store  ──────────▶│
  │                           │◀─── Found! Return cached       │
  │                           │      response (no processing)  │
  │◀── 201 { order: 456 } ────│                                │
  │     (same response!)      │                                │
```
The idempotency store can be Redis (fast, ephemeral), your main database (simple, durable), or a dedicated table. We use Redis with a 24-hour TTL: long enough to catch any retry scenario, short enough to keep memory manageable.
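The TTL behaviour is worth seeing in miniature. A sketch with a `Map` standing in for Redis and an injectable clock so expiry is visible (all names are illustrative):

```javascript
const TTL_MS = 24 * 60 * 60 * 1000; // the article's 24-hour window

function makeIdempotencyStore(now = () => Date.now()) {
  const entries = new Map();
  return {
    put(key, value) {
      entries.set(key, { value, expiresAt: now() + TTL_MS });
    },
    get(key) {
      const entry = entries.get(key);
      if (!entry) return undefined;
      if (now() >= entry.expiresAt) { // expired: behave like Redis TTL eviction
        entries.delete(key);
        return undefined;
      }
      return entry.value;
    },
  };
}
```

After 24 hours a replayed key behaves like a brand-new request, which is exactly the trade-off: any realistic retry happens within the window, and memory stays bounded.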
## Real Code: Laravel Implementation
When designing our e-commerce backend, I built idempotency as middleware so every checkout route gets it automatically:
```php
<?php
// app/Http/Middleware/IdempotencyMiddleware.php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Cache;

class IdempotencyMiddleware
{
    public function handle(Request $request, Closure $next)
    {
        // Only apply to mutating requests
        if (!in_array($request->method(), ['POST', 'PUT', 'PATCH'])) {
            return $next($request);
        }

        $key = $request->header('Idempotency-Key');

        // If no key is provided, proceed normally (enforcement is optional)
        if (!$key) {
            return $next($request);
        }

        // Validate key format (UUID v4)
        if (!preg_match('/^[0-9a-f]{8}-[0-9a-f]{4}-4[0-9a-f]{3}-[89ab][0-9a-f]{3}-[0-9a-f]{12}$/i', $key)) {
            return response()->json(['error' => 'Invalid Idempotency-Key format'], 400);
        }

        $cacheKey = "idempotency:{$key}";

        // Check for an existing result
        $cached = Cache::get($cacheKey);
        if ($cached !== null) {
            // Return the original response; no re-processing
            return response()->json(
                $cached['body'],
                $cached['status'],
                ['X-Idempotent-Replayed' => 'true']
            );
        }

        // Mark as "in progress" to handle concurrent duplicate requests
        $inProgress = Cache::add("idempotency:lock:{$key}", true, 30);
        if (!$inProgress) {
            return response()->json(
                ['error' => 'Request with this key is already being processed'],
                409
            );
        }

        // Process the actual request
        $response = $next($request);

        // Store the result (24-hour TTL), but only final outcomes:
        // 5xx responses may be transient, so let the client retry those.
        if ($response instanceof JsonResponse && $response->getStatusCode() < 500) {
            Cache::put($cacheKey, [
                'body' => $response->getData(true),
                'status' => $response->getStatusCode(),
            ], now()->addHours(24));
        }

        // Release the lock
        Cache::forget("idempotency:lock:{$key}");

        return $response;
    }
}
```
```php
// Register the 'idempotency' alias (bootstrap/app.php in Laravel 11+,
// or $middlewareAliases in app/Http/Kernel.php), then apply it per group:
Route::middleware(['idempotency'])->group(function () {
    Route::post('/checkout', [CheckoutController::class, 'process']);
    Route::post('/payments/charge', [PaymentController::class, 'charge']);
    Route::post('/subscriptions', [SubscriptionController::class, 'create']);
});
```
The `X-Idempotent-Replayed: true` header is a nice touch: clients can detect that they got a cached response and skip any "order created" analytics events to avoid double-counting.
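That client-side guard is one line. A sketch (the header name matches the middleware above; the function name is illustrative, and it's lower-cased because Node lower-cases incoming header names):

```javascript
// Only fire the "order created" analytics event for genuinely new orders,
// not for replayed responses to a retry or double-click.
function shouldTrackOrderCreated(headers) {
  return headers['x-idempotent-replayed'] !== 'true';
}
```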
## Node.js Version
```javascript
const Redis = require('ioredis');
const { validate: uuidValidate } = require('uuid');

const redisClient = new Redis(process.env.REDIS_URL);
const IDEMPOTENCY_TTL = 24 * 60 * 60; // 24 hours in seconds

async function idempotencyMiddleware(req, res, next) {
  const idempotencyKey = req.headers['idempotency-key'];
  if (!idempotencyKey || !['POST', 'PUT', 'PATCH'].includes(req.method)) {
    return next();
  }
  if (!uuidValidate(idempotencyKey)) {
    return res.status(400).json({ error: 'Invalid Idempotency-Key format' });
  }

  const cacheKey = `idempotency:${idempotencyKey}`;
  const lockKey = `idempotency:lock:${idempotencyKey}`;

  // Check for an existing result
  const cached = await redisClient.get(cacheKey);
  if (cached) {
    const { body, statusCode } = JSON.parse(cached);
    res.set('X-Idempotent-Replayed', 'true');
    return res.status(statusCode).json(body);
  }

  // Acquire a distributed lock (NX = only set if the key does not exist)
  const lockAcquired = await redisClient.set(lockKey, '1', 'EX', 30, 'NX');
  if (!lockAcquired) {
    return res.status(409).json({
      error: 'A request with this idempotency key is already in flight',
    });
  }

  // Intercept res.json to cache the outcome before replying
  const originalJson = res.json.bind(res);
  res.json = (body) => {
    // Only cache successful/client-error responses, not server errors
    // (server errors might be transient; let the client retry)
    const persist = res.statusCode < 500
      ? redisClient.setex(
          cacheKey,
          IDEMPOTENCY_TTL,
          JSON.stringify({ body, statusCode: res.statusCode })
        )
      : Promise.resolve();
    // Release the lock once the cache write settles; res.json stays synchronous
    persist.catch(() => {}).finally(() => redisClient.del(lockKey));
    return originalJson(body);
  };

  next();
}
```
The nuance that bit us: don't cache 5xx responses. If your payment processor is temporarily down and returns a 500, you don't want that failure cached for 24 hours. The client should be able to retry and hit the real service once it recovers.
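That caching rule reduces to a single predicate, worth extracting so it's tested and reused in one place (a sketch; the function name is illustrative):

```javascript
// 2xx and 4xx are final outcomes — safe to replay from the cache.
// 5xx may be transient, so retries should reach the real service.
const shouldCacheResponse = (statusCode) =>
  statusCode >= 200 && statusCode < 500;
```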
## The Concurrent Request Problem
Here's a subtle race condition that trips up most first implementations:
```
T=0ms:   Request A arrives (key: abc123) → not in cache → starts processing
T=5ms:   Request B arrives (key: abc123) → not in cache yet! → ALSO starts processing
T=500ms: Request A finishes → charges $89.99 → stores result
T=505ms: Request B finishes → charges $89.99 → OVERWRITES result

Result: two charges, and the customer never knew. The cache entry looks
"correct" because both requests stored the same-looking response.
```
The distributed lock (NX flag in Redis) solves this. Request B hits the lock and gets a 409. The client retries after a short delay, now finds the cached result from Request A, and returns it. No double charge.
```
Without lock:                      With lock:
T=0ms:   A starts                  T=0ms:   A starts, acquires lock
T=5ms:   B starts                  T=5ms:   B arrives, lock exists → 409
T=500ms: A charges card            T=500ms: A charges card, stores result
T=505ms: B ALSO charges!           T=510ms: B retries → hits cache → gets A's result ✅
```
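The lock semantics can be sketched with a `Set` standing in for Redis: `acquireLock` mirrors `SET … NX`, succeeding only if the key does not already exist (names are illustrative):

```javascript
const heldLocks = new Set();

function acquireLock(key) {
  if (heldLocks.has(key)) return false; // NX: someone is already processing
  heldLocks.add(key);
  return true;
}

function releaseLock(key) {
  heldLocks.delete(key);
}
```

In real Redis this check-and-set is a single atomic command, which is the whole point: two concurrent requests can't both see "no lock" and proceed.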
## Where to Store Idempotency Keys
**Option 1: Redis (what we use)**

- ✅ Fast (sub-millisecond lookup)
- ✅ TTL built in (auto-cleanup after 24h)
- ✅ Atomic NX for distributed locking
- ❌ Ephemeral: a Redis restart loses keys
- ❌ Another dependency to manage
**Option 2: Database table**

```sql
CREATE TABLE idempotency_keys (
    `key`       VARCHAR(36) PRIMARY KEY,  -- backticked: KEY is a reserved word in MySQL
    user_id     BIGINT NOT NULL,
    endpoint    VARCHAR(255) NOT NULL,
    response    JSON,
    status_code SMALLINT,
    created_at  TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    expires_at  TIMESTAMP NOT NULL,
    INDEX idx_expires_at (expires_at)     -- for cleanup jobs
);
```

- ✅ Durable (survives a Redis restart)
- ✅ Can enforce per-user key uniqueness
- ✅ Auditable: see every idempotency replay
- ❌ Slower (a DB query on every POST)
- ❌ Table grows: needs a cleanup job
A scalability lesson that cost us a painful incident: we started with just Redis. During a Redis failover, keys were lost mid-checkout. Users who retried during the 90-second recovery window got duplicate charges because the idempotency check came back empty. We now write critical idempotency records to Postgres with Redis as a cache in front.
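The resulting "Postgres first, Redis in front" shape can be sketched with two `Map`s standing in for the two systems (all names are illustrative): the durable store is written first, so wiping the cache can never lose a record.

```javascript
const durableStore = new Map(); // Postgres stand-in: survives cache loss
const cacheStore = new Map();   // Redis stand-in: the fast path

function storeIdempotencyRecord(key, response) {
  durableStore.set(key, response); // durable write first...
  cacheStore.set(key, response);   // ...then the cache in front of it
}

function lookupIdempotencyRecord(key) {
  if (cacheStore.has(key)) return cacheStore.get(key); // fast path
  const record = durableStore.get(key); // cache miss, e.g. after a failover
  if (record !== undefined) cacheStore.set(key, record); // re-warm the cache
  return record;
}
```

With this ordering, a retry arriving during a cache outage falls through to the durable record instead of seeing "key not found" and double-charging.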
## The Client Side Matters Too
Idempotency keys only work if the client generates and reuses them correctly:
```javascript
// ❌ Wrong: new key on every retry; defeats the entire purpose
async function checkout(cart) {
  for (let attempt = 0; attempt < 3; attempt++) {
    const response = await fetch('/checkout', {
      method: 'POST',
      headers: {
        'Idempotency-Key': crypto.randomUUID(), // new key each time!
        'Content-Type': 'application/json',
      },
      body: JSON.stringify(cart),
    });
    if (response.ok) return response.json();
  }
}
```
```javascript
// ✅ Correct: generate key ONCE, reuse on all retries
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function checkout(cart) {
  const idempotencyKey = crypto.randomUUID(); // generated ONCE, before the loop
  for (let attempt = 0; attempt < 3; attempt++) {
    try {
      const response = await fetch('/checkout', {
        method: 'POST',
        headers: {
          'Idempotency-Key': idempotencyKey, // same key on every retry
          'Content-Type': 'application/json',
        },
        body: JSON.stringify(cart),
      });
      if (response.ok) return response.json();
      if (response.status === 409 || response.status >= 500) {
        // 409: original request still in flight; the retry will hit the cache
        await sleep(1000 * (attempt + 1)); // simple linear backoff
        continue;
      }
      return response.json(); // other 4xx → don't retry
    } catch (networkError) {
      await sleep(1000 * (attempt + 1));
    }
  }
}
```
The key must be tied to a specific intent, not a specific attempt. "I want to check out cart #42 right now" gets one key. If the user goes back, modifies the cart, and tries again, that's a new intent: generate a new key.
## Common Mistakes I Made
**Mistake #1: Scoping keys globally instead of per-user**

```php
// ❌ User A sends key "abc123" → result cached globally.
//    User B guesses or observes key "abc123" → gets User A's order data!
$cacheKey = "idempotency:{$key}";

// ✅ Always scope by user
$cacheKey = "idempotency:{$userId}:{$key}";
```
**Mistake #2: Caching 5xx responses**
A timeout from your payment processor is a server error. Cache that response, and the next retry will get "payment failed" even after the processor recovered. Only cache 2xx and 4xx.
**Mistake #3: Not validating that the key matches the request body**

If someone reuses the same key with a different request body, should you return the original response or reject it? Stripe rejects it with an idempotency error. We do the same: a key is tied to one specific payload hash.
```php
// Store a payload hash alongside the response
$payloadHash = hash('sha256', $request->getContent());

Cache::put($cacheKey, [
    'body' => $response->getData(true),
    'status' => $response->getStatusCode(),
    'payload_hash' => $payloadHash,
], now()->addHours(24));

// On replay, verify it's the same payload
if ($cached['payload_hash'] !== hash('sha256', $request->getContent())) {
    return response()->json(
        ['error' => 'Idempotency-Key already used for a different request body'],
        422
    );
}
```
## When Does Idempotency Matter Most?
| Operation | Idempotency Needed? | Why |
|---|---|---|
| Payment / charge | Critical | Money. Need I say more? |
| Order creation | Critical | Duplicate orders = refund nightmare |
| Email sends | High | Getting the same email 3x is annoying |
| Inventory reservation | High | Double-reserving = overselling |
| User registration | Medium | Usually caught by unique constraints |
| Analytics events | Low | Duplicate events are mostly harmless |
| GET requests | None | Already idempotent by nature |
## TL;DR
**The Problem:**

- User double-clicks → two POST requests → two charges → angry customer
- Network retry → same POST twice → duplicate order → audit nightmare

**The Solution (Idempotency Keys):**

- Client generates a UUID once per "intent"
- Client sends the UUID as an `Idempotency-Key` header on every attempt
- Server checks: have I seen this key before?
  - Yes → return the cached response, skip processing ✅
  - No → process normally, store the result with the key
- Double-clicks and retries become harmless
**Implementation checklist:**

- ✅ Generate the key on the client (UUID v4), reuse it across retries
- ✅ Server checks the cache before processing
- ✅ Use a distributed lock for concurrent duplicate requests (Redis NX)
- ✅ Cache 2xx and 4xx, never 5xx (5xx may be transient)
- ✅ Scope keys per user (not globally)
- ✅ Validate the payload hash on replay (optional but safe)
- ✅ A TTL of 24h covers all realistic retry windows
**What idempotency is NOT:**

- ❌ A substitute for transactions (still use those inside the handler)
- ❌ A deduplication system (different users can have the same cart, with different keys)
- ❌ A replacement for proper error handling
As a Technical Lead, I've learned that distributed systems have three certainties: death, taxes, and network retries. You can't stop users from double-clicking or clients from retrying on timeout. What you can control is whether those retries cause duplicate charges or just return a politely cached "yes, we already did that."
34 duplicate orders on Black Friday taught me that lesson better than any architecture book ever could.
Built idempotency into an API and have war stories? Find me on LinkedIn, especially if you have a story about the client team generating a new UUID on every retry. Those are my favorite 2 AM debugging sessions.
Want to see the full middleware implementation including payload hashing and audit logs? Check out GitHub.
Make your APIs make peace with retries.
P.S. Stripe's idempotency key docs are genuinely the best in the industry. If you want to see idempotency done right at scale, read them, then copy shamelessly. We did.