ERROR 40001: Could Not Serialize Access in Postgres
Error Message

```
ERROR: could not serialize access due to concurrent update
SQLSTATE: 40001
```

Other common variations:

```
ERROR: could not serialize access due to read/write dependencies among transactions
DETAIL: Reason code: Canceled on identification as a pivot, during conflict out checking.
HINT: The transaction might succeed if retried.
```

```
ERROR: could not serialize access due to concurrent delete
```

What Triggers This Error
PostgreSQL error 40001 fires when a transaction running at REPEATABLE READ or SERIALIZABLE isolation detects a conflict that would violate snapshot consistency. Unlike a deadlock (ERROR 40P01), there is no circular wait: Postgres simply refuses to proceed because another transaction already changed data your transaction read. The fix is almost always to retry the transaction, but the root cause varies:
- Concurrent update under `REPEATABLE READ`: two transactions update overlapping rows, and the second one cannot reconcile with the first's committed changes
- Read-write conflict under `SERIALIZABLE`: the Serializable Snapshot Isolation (SSI) detector finds a dangerous dependency cycle
- Long-running report transaction conflicting with OLTP writes: a long read-only transaction holds its snapshot open, and under `SERIALIZABLE` it can become a "pivot" that SSI aborts
- Missing application-level retry logic: the transaction may succeed if retried, but the code treats 40001 as a fatal error instead of using bounded retries with backoff and jitter
- Isolation level mismatch: the application sets `SERIALIZABLE` in the session but the developer expected `READ COMMITTED` semantics
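Because 40001 and 40P01 share the same remedy, application code often classifies them together by SQLSTATE before deciding to retry. A minimal sketch, assuming psycopg2, which exposes the SQLSTATE on the exception's `pgcode` attribute (the helper name is illustrative):

```python
# Hypothetical helper: decide whether a failed transaction is worth
# retrying, based on its SQLSTATE. 40001 = serialization_failure,
# 40P01 = deadlock_detected; both are transient by design.
RETRYABLE_SQLSTATES = {"40001", "40P01"}

def is_retryable(exc) -> bool:
    # psycopg2 exceptions carry the SQLSTATE in .pgcode; other drivers
    # expose it under a different attribute name.
    return getattr(exc, "pgcode", None) in RETRYABLE_SQLSTATES
```

Other errors, such as constraint violations (SQLSTATE class 23), should never be retried blindly, since they fail the same way every time.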
Fix by Scenario
Concurrent update under REPEATABLE READ
The most common cause. Two transactions read the same row and each tries to update it based on what it read; Postgres aborts the second UPDATE as soon as the first transaction's commit invalidates the second one's snapshot.
```sql
-- Transaction A
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1; -- reads 100
-- ... application logic ...
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
COMMIT; -- succeeds

-- Transaction B (started before A committed)
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT balance FROM accounts WHERE id = 1; -- also reads 100
UPDATE accounts SET balance = balance - 20 WHERE id = 1;
-- ERROR: could not serialize access due to concurrent update
```

Fix: Retry the transaction in application code with bounded retries and exponential backoff plus jitter. A retry may succeed once the conflicting transaction has committed, but under heavy contention it can fail repeatedly, so always cap retry attempts and surface a clear error to the caller when the cap is hit:
```python
import psycopg2
from psycopg2 import errors
import time
import random

# Isolation must be set while no transaction is open
conn.set_session(isolation_level='REPEATABLE READ')

max_retries = 3
for attempt in range(max_retries):
    try:
        with conn:  # commits on success, rolls back on exception
            cur = conn.cursor()
            cur.execute("SELECT balance FROM accounts WHERE id = %s", (1,))
            balance = cur.fetchone()[0]
            cur.execute("UPDATE accounts SET balance = %s WHERE id = %s",
                        (balance - 10, 1))
        break
    except errors.SerializationFailure:
        conn.rollback()
        if attempt == max_retries - 1:
            raise
        # Exponential backoff with jitter
        time.sleep((0.1 * (2 ** attempt)) + random.uniform(0, 0.05))
```

The retry block must re-read the input data: re-running the same UPDATE statements without re-selecting will use stale values.
Read-write conflict under SERIALIZABLE
SERIALIZABLE isolation uses Serializable Snapshot Isolation (SSI), which tracks read-write dependencies between transactions. When it detects a dependency cycle that would violate serial execution, it aborts one of the transactions with 40001.
```sql
-- Transaction A: reads user count, then inserts
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT COUNT(*) FROM users WHERE org_id = 5; -- reads 10
INSERT INTO users (org_id, name) VALUES (5, 'alice');
COMMIT;

-- Transaction B (concurrent): does the same thing
BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT COUNT(*) FROM users WHERE org_id = 5; -- also reads 10
INSERT INTO users (org_id, name) VALUES (5, 'bob');
COMMIT;
-- ERROR: could not serialize access due to read/write dependencies among transactions
```

Both transactions saw the same count, so committing both would let each think it was the 11th user, violating serializability.
Fix: The same retry pattern as above. Additionally, reduce SSI contention:
- Use predicate indexes on columns in WHERE clauses: SSI tracks predicates more precisely with an index than with a sequential scan, which takes a predicate lock on the whole relation
- Mark truly read-only transactions as `READ ONLY DEFERRABLE`; these never abort with 40001:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE READ ONLY DEFERRABLE;
-- Postgres waits until it finds a safe snapshot, then runs without SSI overhead
SELECT ...;
COMMIT;
```

Long-running report transaction conflicting with OLTP writes
A 10-minute analytics query at REPEATABLE READ or SERIALIZABLE holds its snapshot open the entire time. Every write that touches the queried rows becomes a potential conflict target.
```sql
-- Reporting session
BEGIN ISOLATION LEVEL REPEATABLE READ;
SELECT complex_aggregation FROM huge_table WHERE ...;
-- 10 minutes of processing...
COMMIT;

-- Meanwhile, concurrent writes that also run at elevated isolation get aborted:
-- ERROR: could not serialize access due to concurrent update
```

Fix:
- Run reports at `READ COMMITTED` if snapshot consistency is not required
- For `SERIALIZABLE`, use `READ ONLY DEFERRABLE` (shown above)
- Move heavy reporting to a read replica so OLTP and analytics don't share a snapshot timeline
- Split long transactions into smaller chunks with separate snapshots
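The chunking idea in the last bullet can be sketched as keyset pagination, where each chunk runs in its own short transaction and therefore its own snapshot. This sketch assumes a psycopg2-style connection and an indexed `id` column on `huge_table` (both the table and the aggregation are illustrative):

```python
def report_in_chunks(conn, chunk_size=10_000):
    """Sum `amount` over huge_table one keyset-paginated chunk at a time."""
    last_id = 0
    total = 0
    while True:
        with conn:  # one short transaction (and snapshot) per chunk
            cur = conn.cursor()
            cur.execute(
                "SELECT id, amount FROM huge_table"
                " WHERE id > %s ORDER BY id LIMIT %s",
                (last_id, chunk_size),
            )
            rows = cur.fetchall()
        if not rows:
            return total
        total += sum(amount for _, amount in rows)
        last_id = rows[-1][0]  # resume after the last row seen
```

The trade-off is that rows modified between chunks can be double-counted or missed, so this fits reports that tolerate slight inconsistency, not audit-grade numbers.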
Missing application-level retry logic
If your ORM or framework bubbles up 40001 as a generic database error without retry, every concurrent update looks like an outage.
```python
# Django example
from django.db import connection, transaction, OperationalError

def transfer(from_id, to_id, amount):
    for attempt in range(3):
        try:
            with transaction.atomic():
                # transaction.atomic() has no isolation_level argument; raise
                # the level explicitly as the transaction's first statement,
                # or configure it globally in DATABASES['default']['OPTIONS'].
                with connection.cursor() as cur:
                    cur.execute("SET TRANSACTION ISOLATION LEVEL REPEATABLE READ")
                from_acc = Account.objects.select_for_update().get(id=from_id)
                to_acc = Account.objects.select_for_update().get(id=to_id)
                from_acc.balance -= amount
                to_acc.balance += amount
                from_acc.save()
                to_acc.save()
            return
        except OperationalError as e:
            if 'could not serialize' not in str(e):
                raise
            if attempt == 2:
                raise
```

Fix: Wrap every REPEATABLE READ or SERIALIZABLE transaction in retry logic at the application layer. SQLAlchemy, Django, and most ORMs do NOT do this by default.
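One way to apply that advice uniformly is a small retry decorator. This is a framework-agnostic sketch; the `"could not serialize"` string match mirrors the example above, though matching on SQLSTATE 40001 is more robust when the driver exposes it:

```python
import functools
import random
import time

def retry_serialization(max_attempts=3, base_delay=0.1, sleep=time.sleep):
    """Retry the wrapped callable on serialization failure, with
    capped exponential backoff plus jitter between attempts."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except Exception as exc:
                    retryable = "could not serialize" in str(exc)
                    if not retryable or attempt == max_attempts - 1:
                        raise  # non-retryable, or retry budget exhausted
                    sleep(base_delay * (2 ** attempt)
                          + random.uniform(0, base_delay / 2))
        return wrapper
    return decorator
```

Applied as `@retry_serialization()` on a function like `transfer` above, it keeps the retry policy in one place; the `sleep` parameter is injectable so tests don't actually wait.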
Isolation level mismatch
Sometimes a transaction is unintentionally elevated to REPEATABLE READ or SERIALIZABLE by connection pooler defaults or session settings.
```sql
-- Check session default
SHOW default_transaction_isolation;

-- Check a specific session's setting
SELECT current_setting('transaction_isolation');
```

Fix: Explicitly declare the isolation level per transaction. Use READ COMMITTED unless you actually need snapshot consistency; it has no 40001 exposure:

```sql
BEGIN ISOLATION LEVEL READ COMMITTED;
-- ... work ...
COMMIT;
```

Or set it via the connection URL:

```
postgresql://user:pass@host/db?options=-c%20default_transaction_isolation%3Dread%20committed
```
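To catch such mismatches early, a startup check can compare the session default against what the application expects. A sketch assuming a psycopg2-style connection (the function name is illustrative):

```python
def check_default_isolation(conn, expected="read committed"):
    """Fail fast if the session default isolation differs from `expected`."""
    cur = conn.cursor()
    cur.execute("SHOW default_transaction_isolation")
    actual = cur.fetchone()[0]
    if actual != expected:
        raise RuntimeError(
            f"default_transaction_isolation is {actual!r}, expected {expected!r}"
        )
    return actual
```

Running this once at pool initialization turns a silent pooler-default surprise into an immediate, descriptive failure.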
Prevention
- Add retry logic (with exponential backoff) for every transaction running at `REPEATABLE READ` or `SERIALIZABLE`: 40001 is expected, not exceptional
- Keep transactions short: read, update, commit. Avoid user input or external API calls mid-transaction
- Use `READ COMMITTED` by default; elevate only where snapshot consistency is required (financial transfers, audit-safe reads, inventory decrements)
- Mark read-only transactions as `READ ONLY` or `READ ONLY DEFERRABLE` to reduce SSI overhead
- Add indexes on columns used in WHERE predicates of serializable transactions; this helps SSI track conflicts precisely
- Monitor `pg_stat_database.xact_rollback` for rising rollback rates, which often indicate 40001 storms
- Move reporting workloads to a read replica so long snapshots don't conflict with OLTP
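The `xact_rollback` check above can be automated by sampling the counters on an interval and comparing consecutive readings; since the counters are cumulative, the delta is what matters. A sketch with purely illustrative threshold (2% is an assumption, not a Postgres default):

```python
def rollback_storm(prev, curr, threshold=0.02):
    """Given two (xact_commit, xact_rollback) samples from
    pg_stat_database, flag when rollbacks exceed `threshold` of the
    transactions that finished between the two samples."""
    commits = curr[0] - prev[0]
    rollbacks = curr[1] - prev[1]
    total = commits + rollbacks
    return total > 0 and rollbacks / total > threshold
```

The samples come from `SELECT xact_commit, xact_rollback FROM pg_stat_database WHERE datname = current_database()`; note that rollbacks also count explicit ROLLBACKs and other errors, so treat a sustained rise as a signal to check the logs for 40001, not as proof by itself.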
Bytebase's SQL Review can flag long-running statements and missing index coverage during change review, reducing the likelihood of serialization failures in production. See also ERROR 40P01: Deadlock Detected for a different class of concurrency error with the same retry pattern.