SQL, deeper.
Spring's @Transactional, Hibernate's cascades, JPA's optimistic locking — none of these make sense without first understanding what a transaction is and why isolation levels exist. Today fills in the exact gap @Transactional sits on top of.
Transactions — the unit of "all or nothing"
A transaction is a group of SQL statements that either all succeed or all fail together. There is no middle.
Classic example — money transfer between accounts:
```sql
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
COMMIT;
```

If the second UPDATE fails (network hiccup, constraint, anything), COMMIT never runs and a ROLLBACK reverses the first UPDATE. Money never disappears. Without transactions, that's exactly what could happen — debit one account and never credit the other.
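To make the rollback concrete, here's a minimal sketch using Python's stdlib sqlite3 module. The accounts table, its balances, and the CHECK constraint (standing in for "any failure") are inventions of the demo, not part of the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY,"
             " balance INTEGER NOT NULL CHECK (balance >= 0))")
conn.execute("INSERT INTO accounts VALUES (1, 500), (2, 100)")
conn.commit()

def transfer(conn, from_id, to_id, amount):
    try:
        # Both UPDATEs run inside one transaction.
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                     (amount, from_id))
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                     (amount, to_id))
        conn.commit()
    except sqlite3.Error:
        conn.rollback()  # any failure undoes the debit too
        raise

transfer(conn, 1, 2, 100)       # succeeds: balances become 400 / 200
try:
    transfer(conn, 1, 2, 9999)  # debit violates CHECK -> whole transfer rolls back
except sqlite3.IntegrityError:
    pass

balances = conn.execute("SELECT balance FROM accounts ORDER BY id").fetchall()
print(balances)  # [(400,), (200,)]: nothing half-applied
```

The failed transfer leaves both balances exactly where they were; the debit never survives on its own.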
ACID — what transactions guarantee
| Property | Meaning | Concrete failure prevented |
|---|---|---|
| Atomicity | All statements commit, or none do | Half-applied money transfer |
| Consistency | Constraints (FK, unique) hold before and after | Order pointing to deleted user |
| Isolation | Concurrent transactions don't see each other's mid-flight changes | Reading a balance that's about to roll back |
| Durability | Once COMMIT returns, data survives crashes | Successful order vanishing on power loss |
If asked "what does ACID mean," focus on Atomicity and Isolation — they come up in 90% of real-world transaction discussions. Consistency and Durability are largely automatic; Isolation is where bugs and design choices live.
The four concurrency anomalies
When multiple transactions run at the same time, four things can go wrong. Each isolation level (next section) blocks some of them.
Dirty read
- Tx-A reads a value Tx-B has written but not committed
- Tx-B rolls back → Tx-A acted on data that never existed
- Worst kind. Most databases prevent this by default.
Non-repeatable read
- Tx-A reads row X, gets value 100
- Tx-B updates X to 200 and commits
- Tx-A reads X again → gets 200
- Same query, different result, in the same transaction
Phantom read
- Tx-A runs `SELECT COUNT(*) FROM t WHERE x = 1`, gets 5
- Tx-B inserts a new matching row, commits
- Tx-A runs the same SELECT, now gets 6
- New rows "appearing" — like phantoms
Lost update
- Tx-A reads counter = 5
- Tx-B reads counter = 5
- Tx-A writes 6, commits
- Tx-B writes 6, commits — A's update is lost
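The lost-update interleaving is easy to reproduce. Here's a sketch with two sqlite3 connections in autocommit mode playing Tx-A and Tx-B (the counters table is made up for the demo):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
a = sqlite3.connect(path, isolation_level=None)  # autocommit: each statement commits
b = sqlite3.connect(path, isolation_level=None)
a.execute("CREATE TABLE counters (id INTEGER PRIMARY KEY, value INTEGER)")
a.execute("INSERT INTO counters VALUES (1, 5)")

# Both "transactions" read the same starting value...
va = a.execute("SELECT value FROM counters WHERE id = 1").fetchone()[0]  # 5
vb = b.execute("SELECT value FROM counters WHERE id = 1").fetchone()[0]  # 5

# ...and each writes back read-value + 1.
a.execute("UPDATE counters SET value = ? WHERE id = 1", (va + 1,))
b.execute("UPDATE counters SET value = ? WHERE id = 1", (vb + 1,))

final = a.execute("SELECT value FROM counters WHERE id = 1").fetchone()[0]
print(final)  # 6, not 7: A's increment was silently lost
```

Note the cheapest fix here isn't locking at all: `UPDATE counters SET value = value + 1` pushes the read-modify-write into a single statement the database executes atomically.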
Isolation levels — your knob
SQL defines four isolation levels. Higher = more protection, but also more locking and slower throughput.
| Level | Dirty read | Non-repeatable | Phantom |
|---|---|---|---|
| READ UNCOMMITTED | possible | possible | possible |
| READ COMMITTED | prevented | possible | possible |
| REPEATABLE READ | prevented | prevented | possible* |
| SERIALIZABLE | prevented | prevented | prevented |
* The table shows the minimum the ANSI SQL standard requires. Postgres's implementation prevents phantoms even at REPEATABLE READ, and MySQL's InnoDB prevents most phantoms via gap locks.
Defaults you'll meet in real life
- Postgres / Oracle / SQL Server default to READ COMMITTED
- MySQL InnoDB defaults to REPEATABLE READ
For most CRUD apps, the default is fine. Reach for SERIALIZABLE only when you have specific concurrency invariants to protect — and accept the throughput cost.
Pessimistic vs optimistic locking
Pessimistic locking
- `SELECT ... FOR UPDATE` — locks the rows so no one else can modify them until you commit
- Other transactions block, waiting
- Safe but expensive; risk of deadlocks
- Use when: contention is high, conflicts are likely
Optimistic locking
- Each row has a `version` column
- UPDATE includes `WHERE version = ?` with the value you read
- If someone else updated first, your UPDATE affects 0 rows → retry
- Cheap, no blocking; you handle the retry
- JPA's `@Version` annotation does exactly this
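The version-column technique fits in a few lines. A sketch with sqlite3 — the products table, stock column, and the simulated concurrent writer are all assumptions of the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY,"
             " stock INTEGER, version INTEGER)")
conn.execute("INSERT INTO products VALUES (1, 10, 0)")

def decrement_stock(conn, product_id):
    stock, version = conn.execute(
        "SELECT stock, version FROM products WHERE id = ?",
        (product_id,)).fetchone()
    # The WHERE clause only matches if nobody updated the row since we read it.
    cur = conn.execute(
        "UPDATE products SET stock = ?, version = ? WHERE id = ? AND version = ?",
        (stock - 1, version + 1, product_id, version))
    return cur.rowcount == 1  # False -> conflict; caller re-reads and retries

ok = decrement_stock(conn, 1)  # no contention: succeeds

# Now simulate a concurrent writer committing between our read and our write:
stock, version = conn.execute(
    "SELECT stock, version FROM products WHERE id = 1").fetchone()
conn.execute("UPDATE products SET version = version + 1 WHERE id = 1")
stale = conn.execute(
    "UPDATE products SET stock = ?, version = ? WHERE id = 1 AND version = ?",
    (stock - 1, version + 1, version))
print(ok, stale.rowcount)  # True 0: the stale write affected no rows
```

The zero rowcount is the whole signal — no exception, no blocking, just "your read is stale, go again."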
How Spring's @Transactional uses all of this
You'll see this in Week 4 Day 26 in detail. Preview:
```java
@Transactional(isolation = Isolation.READ_COMMITTED)
public void transferMoney(Long fromId, Long toId, BigDecimal amount) {
    Account from = accountRepo.findById(fromId).orElseThrow();
    Account to = accountRepo.findById(toId).orElseThrow();
    from.debit(amount);
    to.credit(amount);
    accountRepo.save(from);
    accountRepo.save(to);
}
```

Spring opens a transaction before the method runs, commits if it returns normally, rolls back if it throws an unchecked exception. The isolation level passes straight through to the database. Spring isn't doing anything magical — it's calling BEGIN / COMMIT / ROLLBACK for you.
Cascades — what happens when a parent goes
You said you want clarity on cascades. Here it is.
When a row is referenced by a foreign key in another table, deleting it raises a question: what should happen to the dependent rows? SQL gives you several answers, declared on the foreign key:
| Cascade rule | What happens on parent DELETE |
|---|---|
| RESTRICT | Reject the DELETE if children exist |
| CASCADE | Delete the children too |
| SET NULL | Set the FK column on children to NULL |
| SET DEFAULT | Set the FK column to its DEFAULT |
| NO ACTION (the SQL default) | Like RESTRICT, but the check can be deferred to end of transaction |
```sql
CREATE TABLE orders (
  id SERIAL PRIMARY KEY,
  user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
  total NUMERIC(10,2)
);

-- DELETE FROM users WHERE id = 1;
-- → also deletes all orders WHERE user_id = 1
```

CASCADE is convenient but dangerous in production data — one DELETE can wipe out related rows you didn't expect. Many teams prefer RESTRICT plus explicit cleanup in service code, where the deletion is auditable and reviewable. Your call, but understand the trade-off.
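You can watch the cascade fire with sqlite3 (note sqlite requires `PRAGMA foreign_keys = ON` per connection; the sample users and orders are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite: FK enforcement is off by default
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""CREATE TABLE orders (
    id INTEGER PRIMARY KEY,
    user_id INTEGER REFERENCES users(id) ON DELETE CASCADE,
    total NUMERIC
)""")
conn.execute("INSERT INTO users VALUES (1), (2)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(1, 1, 9.99), (2, 1, 5.00), (3, 2, 1.50)])

conn.execute("DELETE FROM users WHERE id = 1")  # one statement...
remaining = conn.execute("SELECT id FROM orders ORDER BY id").fetchall()
print(remaining)  # [(3,)]: ...and user 1's two orders are gone with it
```

One DELETE silently removed two rows from a different table — exactly the convenience and exactly the danger.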
JPA cascades vs. database cascades — different things!
This trips up everyone. JPA has its own cascade attribute on @OneToMany / @ManyToOne. That governs what JPA does in Java when you save / remove an entity — separate from the database FK rule.
Two layers, two settings. We'll fully untangle this in Week 4 (JPA day). For now: they're not the same thing.
Normalization — the brief version
Database normalization is a design discipline that eliminates redundant data by splitting one fat table into many small related ones.
❌ Denormalized
```
orders
---
id, user_email, user_name,
user_address, item_name,
price, qty
```

Update Alice's address → must update every row of hers.
✅ Normalized
```
users  (id, email, name, address)
items  (id, name, price)
orders (id, user_id, item_id, qty)
```

One source of truth per fact.
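The "one source of truth" payoff in miniature, using the normalized schema above with sqlite3 (the sample rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users  (id INTEGER PRIMARY KEY, name TEXT, address TEXT);
CREATE TABLE items  (id INTEGER PRIMARY KEY, name TEXT, price NUMERIC);
CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER,
                     item_id INTEGER, qty INTEGER);
INSERT INTO users  VALUES (1, 'Alice', 'Old St 1');
INSERT INTO items  VALUES (1, 'Widget', 3);
INSERT INTO orders VALUES (1, 1, 1, 2), (2, 1, 1, 5);
""")

# Alice moves: one UPDATE, no matter how many orders she has.
conn.execute("UPDATE users SET address = 'New St 9' WHERE id = 1")

rows = conn.execute("""
    SELECT o.id, u.address
    FROM orders o JOIN users u ON u.id = o.user_id
    ORDER BY o.id
""").fetchall()
print(rows)  # [(1, 'New St 9'), (2, 'New St 9')]
```

In the denormalized layout, the same move would mean updating the address copy stored in every one of her order rows.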
The "forms" simplified
- 1NF — each cell is a single value (no comma-separated lists in columns)
- 2NF — every non-key column depends on the whole primary key
- 3NF — non-key columns depend only on the key, nothing else
For 95% of business apps, aiming for 3NF is the sweet spot. Higher forms (BCNF, 4NF) are for theoretical edge cases.
Sometimes you intentionally duplicate data for read performance — denormalization. Common in analytics, dashboards, write-rare/read-many systems. The trade is you must keep duplicates in sync. Don't denormalize until measured pain forces it.
Deadlocks — when two transactions block each other
A deadlock: Tx-A locks row 1 and waits for row 2, while Tx-B has locked row 2 and waits for row 1. Neither can proceed, so the database detects the cycle and kills one of them. The fix is simple but surprising: always acquire locks in the same order across your codebase. If every code path locks "smaller id first," there can be no cycle.
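The same ordering rule applies to any locks, not just database rows. A sketch with Python threads standing in for transactions (the lock ordering by a made-up ORDER map is the point; everything else is scaffolding):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
# Give every lock a fixed global rank, like "smaller row id first".
ORDER = {id(lock_a): 1, id(lock_b): 2}

def in_order(*locks):
    """Return the locks sorted by their global rank."""
    return sorted(locks, key=lambda l: ORDER[id(l)])

def transfer(l1, l2, log, name):
    first, second = in_order(l1, l2)  # same acquisition order on every path
    with first, second:
        log.append(name)

log = []
# The two threads ask for the locks in opposite argument order, but the
# sort makes them acquire in the same global order, so no cycle can form.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b, log, "t1"))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a, log, "t2"))
t1.start(); t2.start()
t1.join(); t2.join()
print(sorted(log))  # ['t1', 't2']: both finished, no deadlock
```

Remove the `in_order` sort and pass the locks straight through, and this program can hang forever on an unlucky interleaving.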
Lock in today's learning
Take time. These are the questions interviewers ask repeatedly.
- What does ACID stand for? Which one matters most for daily Spring work?
- Define dirty read, non-repeatable read, phantom read, and lost update.
- Most common default isolation level? (Postgres / MySQL)
- What's the difference between pessimistic and optimistic locking?
- Difference between a database CASCADE and a JPA cascade?
- What causes a deadlock, and what's the standard prevention?
End of Day 10. Tomorrow: Raw JDBC — one full hands-on day. Feel the boilerplate Spring saves you from.