Error Message
ERROR: current transaction is aborted, commands ignored until end of transaction block
SQLSTATE: 25P02

You will see this message after a previous statement in the same transaction failed and the transaction was not explicitly rolled back. Every subsequent query in that transaction returns 25P02 until you issue ROLLBACK (or ROLLBACK TO SAVEPOINT).
What Triggers This Error
PostgreSQL puts a transaction into the aborted state as soon as any statement inside it raises an error. Unlike MySQL or SQL Server — which silently roll back the failing statement and let the transaction continue — Postgres requires the application to explicitly clean up. Until ROLLBACK is sent, every new query is rejected with 25P02. The original failure is the root cause; 25P02 is just the visible symptom afterwards. The most common triggers:
- Failed INSERT due to constraint violation, then a follow-up SELECT — the second statement runs in the aborted transaction and returns 25P02 instead of data
- ORM or middleware catches the exception but does not ROLLBACK — Django, Rails, SQLAlchemy and others don't auto-rollback unless you exit their transaction manager properly
- Long transaction hits a CHECK constraint or trigger error mid-flight — earlier work succeeded, but now everything after the failure is blocked
- No SAVEPOINT around a recoverable statement — the application wanted to swallow the error and keep going, but the whole transaction is poisoned
- Connection pooler hands out a connection still in an aborted transaction — the next request starts on a poisoned connection and fails with 25P02 on its first query
Fix by Scenario
Failed INSERT followed by SELECT
The classic example. A unique-key violation in the INSERT aborts the transaction, but the application keeps issuing queries on the same connection.
BEGIN;
INSERT INTO users (id, email) VALUES (1, '[email protected]');
-- ERROR: duplicate key value violates unique constraint "users_pkey"
SELECT * FROM users WHERE id = 1;
-- ERROR: current transaction is aborted, commands ignored until end of transaction block

Fix: Issue ROLLBACK (or COMMIT, which in an aborted transaction acts as a rollback and also clears the state) before running anything else.
ROLLBACK;
SELECT * FROM users WHERE id = 1; -- now works

In application code, the equivalent pattern is to wrap the work in a transaction context that rolls back automatically on exception:
import psycopg2

with psycopg2.connect(dsn) as conn:  # dsn: your connection string
    with conn.cursor() as cur:
        try:
            cur.execute("INSERT INTO users (id, email) VALUES (%s, %s)", (1, '[email protected]'))
        except psycopg2.errors.UniqueViolation:
            conn.rollback()  # explicit, clears the aborted state
        # now safe to continue using `conn` for new statements
        cur.execute("SELECT * FROM users WHERE id = %s", (1,))

The with conn: block on its own handles rollback only if an exception escapes the block. If you catch the exception inside, you must call conn.rollback() yourself.
ORM or middleware swallows the exception
Frameworks that wrap the database connection often hide the rollback responsibility. A try/except that catches the exception and continues will leave the transaction aborted.
# Django — anti-pattern
from django.db import IntegrityError, transaction

with transaction.atomic():
    try:
        User.objects.create(email='[email protected]')
    except IntegrityError:
        # Bug: the transaction is still aborted; the next query inside this
        # `atomic` block will raise 25P02 (surfaced as TransactionManagementError in Django)
        pass
    # raises TransactionManagementError — the atomic block is poisoned
    User.objects.filter(active=True).count()

Fix: Use a nested atomic block (which Django implements as a SAVEPOINT) so the inner failure rolls back to the savepoint instead of poisoning the outer transaction:
with transaction.atomic():  # outer transaction
    try:
        with transaction.atomic():  # inner SAVEPOINT
            User.objects.create(email='[email protected]')
    except IntegrityError:
        pass  # SAVEPOINT rolled back automatically
    # outer transaction still healthy
    User.objects.filter(active=True).count()

The same pattern works in SQLAlchemy with session.begin_nested() and in Rails with transaction(requires_new: true).
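All of these helpers boil down to the same three statements: SAVEPOINT before the risky work, RELEASE on success, ROLLBACK TO on failure. A minimal sketch of such a helper, written against Python's stdlib sqlite3 (whose savepoint syntax matches Postgres) so it runs without a server; the savepoint function is illustrative, not a library API:

```python
import sqlite3
import uuid
from contextlib import contextmanager

@contextmanager
def savepoint(conn):
    """Run a block inside a savepoint; roll back only that block on error."""
    name = "sp_" + uuid.uuid4().hex  # unique, valid SQL identifier
    conn.execute(f"SAVEPOINT {name}")
    try:
        yield
    except Exception:
        conn.execute(f"ROLLBACK TO SAVEPOINT {name}")  # transaction healthy again
        conn.execute(f"RELEASE SAVEPOINT {name}")      # discard the savepoint itself
        raise
    else:
        conn.execute(f"RELEASE SAVEPOINT {name}")

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")
conn.execute("BEGIN")
conn.execute("INSERT INTO users VALUES (1, '[email protected]')")
try:
    with savepoint(conn):
        conn.execute("INSERT INTO users VALUES (1, '[email protected]')")  # duplicate key
except sqlite3.IntegrityError:
    pass  # only the savepoint's work was rolled back
conn.execute("INSERT INTO users VALUES (2, '[email protected]')")  # outer transaction still usable
conn.execute("COMMIT")
```

This is essentially what Django's nested atomic() and SQLAlchemy's begin_nested() generate under the hood.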
Long transaction hits a CHECK constraint or trigger error
A migration or batch job runs many statements in a single transaction. One statement deep into the work fails, and every subsequent statement is now blocked.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1; -- ok
UPDATE accounts SET balance = balance + 100 WHERE id = 2; -- ok
UPDATE accounts SET balance = -50 WHERE id = 3;
-- ERROR: new row for relation "accounts" violates check constraint "balance_nonneg"
UPDATE accounts SET status = 'reconciled' WHERE id = 1;
-- ERROR: current transaction is aborted, commands ignored until end of transaction block

Fix: Decide whether the batch should be all-or-nothing, or whether partial progress is acceptable.
For all-or-nothing (most batches): catch the error in the application and ROLLBACK the whole transaction, then surface the failure for retry or manual intervention.
For partial-progress: wrap each item in a SAVEPOINT so the failure of one item only rolls back that item.
BEGIN;
SAVEPOINT item_1;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
RELEASE SAVEPOINT item_1;
SAVEPOINT item_3;
UPDATE accounts SET balance = -50 WHERE id = 3;
-- ERROR: violates check constraint
ROLLBACK TO SAVEPOINT item_3;
-- transaction is healthy again; can keep going
SAVEPOINT item_4;
UPDATE accounts SET status = 'reconciled' WHERE id = 1; -- works
RELEASE SAVEPOINT item_4;
COMMIT;

Savepoints have a per-savepoint cost — don't wrap each row in its own savepoint for a million-row batch. Group them in chunks of 100–1000 instead.
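The chunking advice looks like this in application code: one savepoint per chunk, and a failing chunk is rolled back and skipped while the rest of the batch commits. A sketch, again using stdlib sqlite3 so it runs without a server; apply_in_chunks and the schema are illustrative, not a library API:

```python
import sqlite3

def apply_in_chunks(conn, items, apply_one, chunk_size=100):
    """Apply items inside one transaction, one SAVEPOINT per chunk.

    A failing chunk is rolled back to its savepoint and recorded;
    all other chunks still commit with the transaction. Returns the
    list of failed chunks for retry or reporting.
    """
    failed = []
    for start in range(0, len(items), chunk_size):
        chunk = items[start:start + chunk_size]
        conn.execute("SAVEPOINT chunk")
        try:
            for item in chunk:
                apply_one(conn, item)
        except sqlite3.Error:
            conn.execute("ROLLBACK TO SAVEPOINT chunk")
            failed.append(chunk)
        finally:
            conn.execute("RELEASE SAVEPOINT chunk")
    return failed

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER CHECK (balance >= 0))")
conn.execute("BEGIN")
rows = [(1, 100), (2, 200), (3, -50), (4, 400)]  # id 3 violates the CHECK
failed = apply_in_chunks(conn, rows,
                         lambda c, r: c.execute("INSERT INTO accounts VALUES (?, ?)", r),
                         chunk_size=2)
conn.execute("COMMIT")
```

Note the tradeoff the chunk size buys: here id 4 shares a chunk with the bad row, so it is skipped along with it — larger chunks mean fewer savepoints but coarser failure units.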
Connection pooler returns a poisoned connection
A connection that ended in an aborted state and was returned to the pool without ROLLBACK will fail the next request that picks it up.
[request 1] BEGIN; ... INSERT fails ...; (connection returned to pool)
[request 2] picks up same connection: SELECT ... ;
-> ERROR: current transaction is aborted
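A connection in this state is detectable before it is handed out: psycopg2 exposes the server-side transaction state via conn.get_transaction_status(), and an aborted transaction reports TRANSACTION_STATUS_INERROR (a psycopg2.extensions constant mirroring libpq's PQtransactionStatus). A sketch of a checkout-time guard; FakeConn is a stand-in so the example runs without a database:

```python
# Status values as defined by libpq and mirrored in psycopg2.extensions;
# inlined here so the sketch is self-contained (an assumption, see lead-in).
TRANSACTION_STATUS_IDLE = 0
TRANSACTION_STATUS_INERROR = 3

def checkout_guard(conn):
    """Clear an aborted transaction before handing the connection out."""
    if conn.get_transaction_status() == TRANSACTION_STATUS_INERROR:
        conn.rollback()
    return conn

class FakeConn:
    """Stand-in for a driver connection left aborted by a previous request."""
    def __init__(self):
        self.status = TRANSACTION_STATUS_INERROR
    def get_transaction_status(self):
        return self.status
    def rollback(self):
        self.status = TRANSACTION_STATUS_IDLE

conn = checkout_guard(FakeConn())  # guard resets the poisoned connection
```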
Fix: Configure the pool to issue DISCARD ALL (or at minimum ROLLBACK) on connection return, or use a pooler that does this automatically:
- PgBouncer — in session mode, runs the configured server_reset_query (default DISCARD ALL) when a connection is returned; note that in transaction mode the reset query is skipped unless server_reset_query_always is set
- psycopg2 pool — call putconn(conn, close=True) if you suspect the connection is in a bad state, or call conn.rollback() before returning it
- Application code — always wrap database work in a try/except/finally that rolls back on error and returns the connection in finally:
conn = pool.getconn()
try:
    # ... work ...
    conn.commit()
except Exception:
    conn.rollback()
    raise
finally:
    pool.putconn(conn)

Schema migration leaves a transaction aborted
A schema change tool runs DDL inside a transaction and a single ALTER TABLE fails (for example, a NOT NULL add on a column with NULLs). The migration tool catches the error but tries to record progress in a tracking table — and that record write fails with 25P02 because the transaction is still poisoned.
BEGIN;
ALTER TABLE orders ADD COLUMN customer_id BIGINT NOT NULL;
-- ERROR: column "customer_id" contains null values
INSERT INTO schema_migrations (version, applied_at) VALUES ('20260427_add_customer_id', NOW());
-- ERROR: current transaction is aborted, commands ignored until end of transaction block

Fix: Roll back first, then write the failure record in a fresh transaction (or use a tool that does this for you):
ROLLBACK;
BEGIN;
INSERT INTO schema_migrations_failures (version, error, failed_at)
VALUES ('20260427_add_customer_id', 'column contains null values', NOW());
COMMIT;

Better still, structure migrations so each statement is reviewed for null safety before deployment. Bytebase's SQL Review flags adding NOT NULL columns without defaults during change review, catching this class of failure before the migration runs.
Prevention
- Use a transaction context manager that rolls back on exception (with conn: in psycopg2, transaction.atomic() in Django, session.begin() in SQLAlchemy) — never bare BEGIN/COMMIT pairs in long-lived application code
- Wrap recoverable statements in SAVEPOINT blocks when you intentionally want to swallow an error and continue
- Configure connection pools to run DISCARD ALL (PgBouncer) or ROLLBACK on connection return, so a poisoned connection cannot leak into the next request
- For batch jobs, decide up front whether failure should abort the batch or continue with savepoints — and document the decision in the script
- Log the original error, not 25P02 — 25P02 is just the trailing symptom; the root cause is whatever statement failed before it
- Avoid mixing DDL and DML in the same transaction in production migrations — DDL failure modes (constraint violations on existing data, lock timeouts) are the most common 25P02 source
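For ongoing monitoring, Postgres reports sessions that are holding an aborted transaction open in pg_stat_activity, with state 'idle in transaction (aborted)'. A read-only diagnostic query:

```sql
-- Sessions currently holding an aborted transaction open
SELECT pid, usename, application_name, state_change
FROM pg_stat_activity
WHERE state = 'idle in transaction (aborted)';
```

Setting idle_in_transaction_session_timeout terminates such sessions automatically after the configured interval, which limits how long a poisoned connection can linger.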
Bytebase's SQL Review catches the most common pre-migration causes of 25P02 — adding NOT NULL without defaults, foreign keys without indexes, and check constraints on tables with violating rows — before the change is deployed. See also ERROR 40001: Could Not Serialize Access for a different transaction failure that requires retry rather than rollback.