# ERROR 1364 (HY000): Field Doesn't Have a Default Value in MySQL

Source: https://www.bytebase.com/reference/mysql/error/1364-field-doesnt-have-a-default-value/

---

## Error Message

```sql
ERROR 1364 (HY000): Field 'column_name' doesn't have a default value
```

## What Triggers This Error

MySQL 1364 fires when an `INSERT` omits a `NOT NULL` column that has no `DEFAULT` clause, while `sql_mode` includes `STRICT_TRANS_TABLES` or `STRICT_ALL_TABLES`. Without strict mode, MySQL silently inserts an "implicit default" — `''` for strings, `0` for numbers, `0000-00-00` for dates — which is exactly the kind of silent corruption strict mode was designed to prevent. Like 1406, 1364 frequently surfaces after a server upgrade or `sql_mode` change rather than at the moment a buggy schema was deployed. The trigger varies:

- **`INSERT` omits a `NOT NULL` column with no `DEFAULT`** — the most common case
- **Adding a `NOT NULL` column to a table with existing rows** — MySQL must fill a value for every existing row, and fails when there is no `DEFAULT` and no per-row value to use
- **ORM serialization drops a field** — the application object has the field, but the ORM omits it from the `INSERT` because the value is `None`/`null`
- **`LOAD DATA INFILE` with mismatched column count** — the file has fewer columns than the table, and the missing columns lack defaults
- **Generated column or trigger references a column the new row doesn't supply** — implicit dependency that surfaces only on insert

## Fix by Scenario

### `INSERT` omits a `NOT NULL` column

The textbook case. The schema requires the column, the `INSERT` doesn't supply it, and there's no `DEFAULT` to fall back on.

```sql
CREATE TABLE users (
  id INT PRIMARY KEY,
  email VARCHAR(255) NOT NULL,
  username VARCHAR(50) NOT NULL,
  created_at TIMESTAMP NOT NULL
);

INSERT INTO users (id, email, username) VALUES (1, 'alice@example.com', 'alice');
-- ERROR 1364 (HY000): Field 'created_at' doesn't have a default value
```

**Fix:** Add a `DEFAULT` to the column for cases where a sensible default exists, or supply the value in every `INSERT`.

```sql
-- Option A: Add a DEFAULT (preferred for timestamps, status flags, etc.)
ALTER TABLE users
  MODIFY COLUMN created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP;

-- Option B: Supply the value
INSERT INTO users (id, email, username, created_at)
VALUES (1, 'alice@example.com', 'alice', NOW());
```

Avoid the temptation to relax `sql_mode` — see "Don't disable strict mode" below.

### Adding `NOT NULL` to a table with existing rows

A migration that adds a `NOT NULL` column to a populated table fails for every existing row, not just new inserts.

```sql
ALTER TABLE orders ADD COLUMN customer_id BIGINT NOT NULL;
-- ERROR 1364 (HY000): Field 'customer_id' doesn't have a default value
```

**Fix:** Use the standard three-step pattern: add the column nullable, backfill it, then enforce `NOT NULL`.

```sql
-- Step 1: Add nullable
ALTER TABLE orders ADD COLUMN customer_id BIGINT NULL;

-- Step 2: Backfill (large tables: chunk this)
UPDATE orders o
JOIN order_legacy_map m ON o.id = m.order_id
SET o.customer_id = m.customer_id;

-- Step 3: Verify and enforce
SELECT COUNT(*) FROM orders WHERE customer_id IS NULL;  -- must be 0
ALTER TABLE orders MODIFY COLUMN customer_id BIGINT NOT NULL;
```

For tables large enough to lock-block writes, use `pt-online-schema-change` or `gh-ost` for the `ALTER` steps.
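Step 2's "chunk this" advice can be sketched in application code. The helper below is hypothetical (the table and join are the ones from the SQL above, and an integer primary key `id` is assumed); it generates bounded `UPDATE` statements so each chunk keeps its transaction, and the locks it holds, short.

```python
# Sketch: chunked backfill for Step 2, assuming an integer primary key `id`.
# Each yielded UPDATE covers one id range; run them one at a time so no
# single transaction locks the whole table.
def backfill_statements(min_id: int, max_id: int, chunk_size: int = 10_000):
    """Yield one bounded UPDATE per id range covering [min_id, max_id]."""
    lo = min_id
    while lo <= max_id:
        hi = min(lo + chunk_size - 1, max_id)
        yield (
            "UPDATE orders o "
            "JOIN order_legacy_map m ON o.id = m.order_id "
            "SET o.customer_id = m.customer_id "
            f"WHERE o.id BETWEEN {lo} AND {hi}"
        )
        lo = hi + 1
```

Executing each statement in its own transaction (with a short sleep between chunks) keeps replication lag and lock contention manageable on large tables.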

### ORM serialization drops the field

The application never populates the field, or the ORM treats `None`/`null` as "field absent" rather than an explicit `NULL`, so the column is omitted from the `INSERT`. The schema rejects the absent column with 1364.

```python
# SQLAlchemy example — User model has `username = Column(String(50), nullable=False)`
user = User(id=1, email='alice@example.com')  # bug: username never set, so it's omitted from the INSERT
session.add(user)
session.commit()
# ERROR 1364 (HY000): Field 'username' doesn't have a default value
```

**Fix:** The 1364 message is misleading here — the real problem is that a required field never made it into the `INSERT` at all. (An explicit `NULL` would raise error 1048 instead.) The fix is at the application layer:

1. Validate the field is set before insert (model validators in Django, Pydantic, Marshmallow)
2. Make the column truly optional (`NULL` allowed) if `None` is a valid state
3. Add a server-side `DEFAULT` so absence becomes a meaningful value

```sql
-- If empty string is a valid placeholder
ALTER TABLE users MODIFY COLUMN username VARCHAR(50) NOT NULL DEFAULT '';
```

But empty-string defaults are usually a smell — they hide bugs. Prefer fixing the application or making the column nullable.
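The fail-fast validation from option 1 can be sketched without any framework. This is a minimal illustration using a plain dataclass (`NewUser` and its field names are hypothetical); a real application would use its framework's validators instead.

```python
# Sketch: validate required fields at the model layer, before any INSERT,
# so a missing value fails with a clear application error instead of 1364.
from dataclasses import dataclass

@dataclass
class NewUser:
    id: int
    email: str
    username: str

    def __post_init__(self):
        # Reject missing/empty required fields as soon as the object is built
        for field_name in ("email", "username"):
            if not getattr(self, field_name):
                raise ValueError(f"required field {field_name!r} is not set")
```

The error now points at the field and fires at object construction, long before the statement reaches MySQL.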

### `LOAD DATA INFILE` with mismatched columns

A bulk import file has fewer columns than the table, and the missing columns are `NOT NULL` without defaults. Strict mode rejects the entire load.

```sql
LOAD DATA INFILE '/tmp/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
(id, email);  -- only 2 columns, but table has 4 NOT NULL columns
-- ERROR 1364 (HY000): Field 'username' doesn't have a default value
```

**Fix:** Either declare defaults for the missing columns, or supply them inline using the `SET` clause:

```sql
LOAD DATA INFILE '/tmp/users.csv'
INTO TABLE users
FIELDS TERMINATED BY ','
(id, email)
SET username = SUBSTRING_INDEX(email, '@', 1),
    created_at = NOW();
```

`SET` runs once per imported row and can compute values from the columns that were loaded. This is the cleanest pattern when the source file is a strict subset of the schema.
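When the load runs through a client-side tool that can't use `LOAD DATA`'s `SET` clause, the same computation can be done by rewriting the file first. A rough sketch (the two-column input layout matches the example above; the helper name is an assumption):

```python
# Sketch: fill missing NOT NULL columns while rewriting a CSV, mirroring
# the SET clause above — username from the email's local part, one fixed
# timestamp for created_at.
import csv
import io
from datetime import datetime, timezone

def fill_missing_columns(src_text: str) -> str:
    """Read rows of (id, email); emit rows of (id, email, username, created_at)."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S")
    out = io.StringIO()
    writer = csv.writer(out)
    for row_id, email in csv.reader(io.StringIO(src_text)):
        username = email.split("@", 1)[0]  # same as SUBSTRING_INDEX(email, '@', 1)
        writer.writerow([row_id, email, username, now])
    return out.getvalue()
```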

### Don't disable strict mode

A common but bad fix is to relax `sql_mode` to silence the error.

```sql
-- DON'T DO THIS in production
SET SESSION sql_mode = 'NO_ENGINE_SUBSTITUTION';
INSERT INTO users (id, email) VALUES (1, 'alice@example.com');
-- "succeeds" — but inserts '' into username and 0000-00-00 00:00:00 into created_at
```

Now the row exists with garbage data. Future queries on `username` or `created_at` return values that don't match what any user ever supplied — and the bug has propagated past the database into reports, analytics, and downstream systems.
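The implicit defaults that non-strict mode substitutes can be summarized in a small lookup. This is a simplified sketch by type family, not MySQL's exact per-type rules:

```python
# Sketch: what non-strict MySQL silently inserts for an omitted NOT NULL
# column with no DEFAULT, grouped by type family (simplified).
def implicit_default(type_family: str):
    return {
        "string": "",                       # VARCHAR, CHAR, TEXT -> empty string
        "numeric": 0,                       # INT, DECIMAL, FLOAT -> zero
        "temporal": "0000-00-00 00:00:00",  # DATETIME, TIMESTAMP -> zero date
    }[type_family]
```

None of these values carry any signal that they were never supplied, which is why they are so hard to distinguish from real data later.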

**Why strict mode is the right default:**

- `STRICT_TRANS_TABLES` — rejects bad data on InnoDB (transactional) tables
- `STRICT_ALL_TABLES` — rejects on all storage engines, including MyISAM
- `STRICT_TRANS_TABLES` is part of the default `sql_mode` in MySQL 5.7 and 8.0; `STRICT_ALL_TABLES` must be enabled explicitly

If you have legacy code that depends on implicit defaults, fix the code or add explicit defaults to the schema. Don't disable strict mode globally.

## Prevention

- Keep `STRICT_TRANS_TABLES` in `sql_mode` (it is the MySQL 5.7/8.0 default for a reason), and add `STRICT_ALL_TABLES` if non-transactional engines are in play
- For every `NOT NULL` column, decide at schema creation whether a `DEFAULT` makes sense (`created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP`, `status ENUM(...) DEFAULT 'pending'`, etc.) — and add it explicitly
- When adding `NOT NULL` to existing tables, always use the three-step pattern: nullable add → backfill → enforce — never a single-step `ADD COLUMN ... NOT NULL`
- In application code, validate required fields at the model layer before they reach the database — fail fast with a clear error message instead of relying on 1364
- For `LOAD DATA INFILE`, use the `SET` clause to compute defaults from loaded columns when the file doesn't include every required field
- Audit `sql_mode` across environments — staging and production must match. A staging database without strict mode can hide 1364 bugs that explode in production
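The cross-environment `sql_mode` audit is mostly string parsing once the value of `SELECT @@GLOBAL.sql_mode` has been captured from each server. A minimal sketch (how the values are collected is left out):

```python
# Sketch: check a captured sql_mode string for either strict flag.
# Composite modes like TRADITIONAL are expanded by the server when set,
# so inspecting the stored flag list is sufficient.
def strict_mode_enabled(sql_mode: str) -> bool:
    flags = {f.strip().upper() for f in sql_mode.split(",")}
    return bool(flags & {"STRICT_TRANS_TABLES", "STRICT_ALL_TABLES"})
```

Running this against every environment's value makes a staging/production mismatch visible before it hides a 1364 bug.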

> **Note:** [Bytebase's SQL Review](https://docs.bytebase.com/sql-review/review-rules/) flags `ALTER TABLE ... ADD COLUMN ... NOT NULL` without a `DEFAULT` clause and `INSERT` statements that omit `NOT NULL` columns, catching 1364 causes during change review rather than at deploy time. See also [ERROR 1406: Data Too Long for Column](/reference/mysql/error/1406-data-too-long-for-column) for the sibling strict-mode error that catches overflow instead of absence.