
Idempotency

Idempotency means a client can send the same logical request more than once — because the network dropped the first reply, because a worker crashed, because a queue redelivered — without creating duplicate side-effects on the server. This page tells you what eFakturuj does for you today and what you have to do yourself.

Current backend posture

The eFakturuj API does not honour an Idempotency-Key HTTP header today. A grep across backend/app/ finds idempotency only in internal Peppol processing (looking up inbound MLS / TDD rows by their external transmission identifiers); there is no request-level dedup middleware and no per-route handler for an Idempotency-Key header on the public Connect API. Sending the same POST /invoices body twice will create two invoice rows.

invoice_number is indexed but not uniquely constrained at the database level (backend/app/models/invoice.py), so it cannot be relied on as a server-side natural dedup key either — calling POST /invoices twice with the same invoice_number succeeds twice. You have to dedupe on your side.

We plan to add Idempotency-Key-aware middleware in a later release. Until then, each POST attempt is at-most-once on the server: a successful write can still lose its reply in transit, and a retry then creates a second write. Design your client for that gap.

Client-side patterns

These are the patterns we recommend regardless of whether the server ever ships dedup.

1. Generate a stable key per logical request

When you decide to send an invoice, generate a UUID and pin it to that intent, not to the HTTP attempt. If the request fails and you retry, reuse the same UUID. Persist it next to the row in your own database (external_request_id or similar) so even a process restart doesn't lose it.

import uuid
request_id = str(uuid.uuid4())  # stored alongside the local invoice row
# ... reuse `request_id` across retries

When the server-side header support lands, the same UUID becomes the value of the Idempotency-Key header — no client change beyond adding the header.
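A minimal sketch of the persistence step, using sqlite3 and a hypothetical outbox table — the table name and the external_request_id column are our own illustration, not anything the eFakturuj API defines:

```python
import sqlite3
import uuid

# Hypothetical local store for outbound invoice intents.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE outbox ("
    "  invoice_number TEXT PRIMARY KEY,"
    "  external_request_id TEXT NOT NULL)"
)

def request_id_for(invoice_number: str) -> str:
    """Return the stable UUID for this logical request.

    Reuses the stored UUID if this invoice was attempted before;
    mints a new one only for a brand-new logical request, so the
    same key survives retries and process restarts.
    """
    row = conn.execute(
        "SELECT external_request_id FROM outbox WHERE invoice_number = ?",
        (invoice_number,),
    ).fetchone()
    if row:
        return row[0]
    rid = str(uuid.uuid4())
    conn.execute("INSERT INTO outbox VALUES (?, ?)", (invoice_number, rid))
    conn.commit()
    return rid
```

Keying the UUID to the invoice number (rather than the HTTP attempt) is what makes it safe to call request_id_for again after a crash.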

2. Retry only on network / timeout / 5xx — never on 4xx

| Outcome | Retry? |
| --- | --- |
| Connection refused, DNS, TLS, timeout | Yes — request never reached the server (or no reply). |
| 5xx (incl. 502 peppol_delivery_failed) | Yes — server failed to act, your call was well-formed. |
| 429 | Yes — but only after Retry-After seconds (see Errors). |
| 4xx other than 429 | No — your input was rejected; retrying repeats the same rejection. |
| 402 usage_limit_exceeded | No — needs a plan upgrade, not a retry. |

3. Exponential backoff with jitter

Naive immediate retries on a slow backend amplify the incident. Stagger them.

import random, time, httpx

def post_with_retry(client: httpx.Client, url: str, json: dict, request_id: str):
    headers = {"X-Request-Id": request_id}  # becomes Idempotency-Key once supported
    for attempt in range(5):
        try:
            r = client.post(url, json=json, headers=headers, timeout=30)
        except httpx.TransportError:
            pass  # transport failure (connect, DNS, TLS, any timeout): retryable
        else:
            if r.status_code < 500 and r.status_code != 429:
                return r  # 2xx or non-retryable 4xx
            if r.status_code == 429:
                time.sleep(int(r.headers.get("Retry-After", "1")))
                continue
        if attempt == 4:
            break  # no point sleeping after the final attempt
        # exponential backoff: 0.5, 1, 2, 4 seconds, ±25% jitter
        delay = (2 ** attempt) * 0.5
        time.sleep(delay * (0.75 + random.random() * 0.5))
    raise RuntimeError(f"failed after retries: {request_id}")

4. Reconcile after the fact

Because the server doesn't dedupe, two retries can both succeed and create two invoice rows. Defend with a periodic reconciliation job:

  • Pull recent invoices via GET /invoices?from=<since>&q=<your invoice_number>
  • If you find duplicates with the same invoice_number for the same org/workspace, void the extras using the documented status transitions (sent invoices cannot be deleted — see the 409 case in Errors).

This is the price of running without server-side dedup. Keep retry counts low (≤ 5) and the reconcile window narrow.
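The duplicate-detection half of that job can be sketched as follows. The invoice_number and created_at field names are assumptions about the GET /invoices response shape — check the API reference for the actual payload:

```python
from collections import defaultdict

def find_duplicates(invoices: list[dict]) -> list[dict]:
    """Return every invoice after the first (oldest) per invoice_number.

    The caller then voids these extras via the documented status
    transitions; the oldest row per number is kept as canonical.
    """
    by_number: dict[str, list[dict]] = defaultdict(list)
    # ISO-8601 timestamps sort correctly as strings.
    for inv in sorted(invoices, key=lambda i: i["created_at"]):
        by_number[inv["invoice_number"]].append(inv)
    return [extra for rows in by_number.values() for extra in rows[1:]]
```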

5. Circuit-break

If the API returns 5xx for n consecutive attempts on the same org, stop calling it for a cool-off window — flooding a struggling backend extends the incident. A simple in-memory counter per organisation is fine; resetting on the first 2xx response is enough.
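One way to sketch that counter, assuming the threshold and cool-off values are yours to tune (nothing here is prescribed by the API):

```python
import time

class CircuitBreaker:
    """Per-org breaker: open after `threshold` consecutive failures,
    stay open for `cooloff` seconds, reset on the first success."""

    def __init__(self, threshold: int = 5, cooloff: float = 60.0):
        self.threshold = threshold
        self.cooloff = cooloff
        self.failures: dict[str, int] = {}
        self.opened_at: dict[str, float] = {}

    def allow(self, org: str) -> bool:
        opened = self.opened_at.get(org)
        if opened is None:
            return True
        if time.monotonic() - opened >= self.cooloff:
            # Cool-off elapsed: let one probe attempt through.
            del self.opened_at[org]
            self.failures[org] = 0
            return True
        return False

    def record(self, org: str, ok: bool) -> None:
        if ok:
            self.failures[org] = 0
            self.opened_at.pop(org, None)
        else:
            self.failures[org] = self.failures.get(org, 0) + 1
            if self.failures[org] >= self.threshold:
                self.opened_at[org] = time.monotonic()
```

Call allow(org) before each request and record(org, r.status_code < 500) after; skipped calls go back on your queue for the next cycle.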

When server-side support lands

The forward-compatible call site looks like this today:

headers = {"X-Request-Id": request_id}
# Once the server respects Idempotency-Key, also add:
# headers["Idempotency-Key"] = request_id

Watch the Changelog for the announcement; the contract will document scope (per API key vs per org), the dedup window TTL, and replay semantics on key collision.