
Why Isolation Levels Matter More Than Most Developers Think (SQL Server, Oracle, PostgreSQL and MySQL)

One of the Most Misunderstood Topics in Databases

After more than 25 years working with relational databases — SQL Server, Oracle, PostgreSQL and MySQL — I can say something that surprises many developers:

One of the most important concepts in databases is also one of the least understood.

That concept is transaction isolation levels.

Most developers only notice isolation levels when something goes wrong:

  • queries suddenly start blocking
  • deadlocks appear
  • transactions become inconsistent
  • reports return different results each time
  • systems become slow under high concurrency

At that moment, people start asking:

Why are my SELECT queries blocked by UPDATE statements?

The answer almost always lies in the isolation level used by the database engine.

And the reality is that different database systems handle concurrency in very different ways.


What Isolation Levels Actually Control

Isolation levels determine how transactions interact with each other.

More specifically, they control:

  • whether reads can see uncommitted data
  • whether reads block writes
  • whether writes block reads
  • whether the same query can return different results inside the same transaction

Isolation levels exist to prevent classic concurrency problems such as:

  • Dirty Reads
  • Non-repeatable Reads
  • Phantom Reads

Different database systems implement these protections using different internal mechanisms.
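
To make the first of these concrete, here is a minimal sketch of a dirty read in SQL Server, using a hypothetical Accounts table. Under READ UNCOMMITTED, session 2 can read a change that session 1 later rolls back:

```sql
-- Session 1: modify a row but do not commit yet (Accounts is a hypothetical table)
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
-- ...transaction still open...

-- Session 2: READ UNCOMMITTED allows reading the uncommitted change (a dirty read)
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT Balance FROM Accounts WHERE AccountId = 1;  -- sees the in-flight change

-- Session 1: roll back -- session 2 has now acted on data that never existed
ROLLBACK TRANSACTION;
```

Non-repeatable reads and phantom reads follow the same two-session pattern, but with the reader re-running its query inside one transaction while the other session commits changes in between.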


The Big Difference: Optimistic vs Pessimistic Concurrency

There are two main strategies used by database engines:

Pessimistic Concurrency

Transactions use locks aggressively to prevent conflicts.

Optimistic Concurrency

The engine keeps multiple versions of each row (multi-version concurrency control, or MVCC), allowing reads and writes to proceed without blocking each other.

Understanding which approach your database uses by default is critical.


Default Isolation Level by Database Engine

Oracle

Default Isolation Level:

READ COMMITTED

But Oracle implements it with multi-versioning, so its behavior is optimistic rather than lock-based.

Oracle relies heavily on data versioning.

When a query runs, it reads a consistent snapshot of the data using undo segments.

This means:

  • SELECT queries rarely block
  • readers do not block writers
  • writers do not block readers

This architecture makes Oracle extremely efficient in high concurrency systems.
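
A small two-session sketch illustrates this (the accounts table is hypothetical). The reader never waits on the writer's open transaction:

```sql
-- Session 1 (Oracle): update a row without committing
UPDATE accounts SET balance = balance - 100 WHERE account_id = 1;

-- Session 2: this query does NOT block; Oracle reconstructs the last
-- committed version of the row from the undo segments
SELECT balance FROM accounts WHERE account_id = 1;

-- Session 1: only now does the new value become visible to other sessions
COMMIT;
```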


PostgreSQL

Default Isolation Level:

READ COMMITTED

PostgreSQL also uses MVCC (Multi-Version Concurrency Control).

Each transaction sees a consistent version of the data based on transaction visibility.

This means:

  • SELECT statements do not block UPDATE
  • UPDATE does not block SELECT
  • high concurrency environments perform very well

MVCC is one of the reasons PostgreSQL scales efficiently for transactional workloads.
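
You can inspect and override the isolation level per transaction in PostgreSQL. A sketch, using a hypothetical accounts table:

```sql
-- Check the isolation level of the current session (PostgreSQL)
SHOW transaction_isolation;

-- Raise the isolation level for a single transaction only
BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM accounts;  -- repeated reads in this transaction see one snapshot
COMMIT;
```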


MySQL (InnoDB)

Default Isolation Level:

REPEATABLE READ

MySQL also uses MVCC, similar to PostgreSQL and Oracle.

With Repeatable Read, a transaction sees the same snapshot throughout its execution.

Benefits include:

  • consistent reads
  • minimal locking for read operations
  • strong concurrency behavior

However, under REPEATABLE READ, InnoDB may still take gap locks (next-key locks) for range conditions, depending on the query and the indexes involved.
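
To see and change the level in MySQL (the @@transaction_isolation variable applies to MySQL 8.0; older versions expose @@tx_isolation):

```sql
-- Check the current isolation level; the InnoDB default is REPEATABLE-READ
SELECT @@transaction_isolation;

-- Lower it for the current session only, without touching other connections
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```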


SQL Server

Default Isolation Level:

READ COMMITTED

However, SQL Server has historically implemented it with a pessimistic locking model: readers take shared locks, writers take exclusive locks, and incompatible locks wait.

This means:

  • SELECT queries can be blocked by UPDATE
  • UPDATE queries can be blocked by SELECT
  • blocking chains can form under heavy concurrency

This behavior surprises developers coming from Oracle or PostgreSQL.


Real Problems This Creates in SQL Server

In high-concurrency environments, the default SQL Server behavior may lead to:

Blocking

Example:

A long-running UPDATE holds locks.

Meanwhile, SELECT queries attempting to read the same rows are blocked.

Users perceive the system as slow even though the server is not overloaded.
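
The scenario above can be reproduced with two sessions and a hypothetical Orders table:

```sql
-- Session 1: the update takes exclusive (X) locks and holds them
BEGIN TRANSACTION;
UPDATE Orders SET Status = 'Processed' WHERE CustomerId = 42;
-- ...no COMMIT yet...

-- Session 2: under default READ COMMITTED this SELECT needs shared locks
-- on the same rows, so it waits until session 1 commits or rolls back
SELECT Status FROM Orders WHERE CustomerId = 42;
```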


Deadlocks

Deadlocks occur when two transactions wait for resources locked by each other.

SQL Server detects this situation and chooses a victim transaction to terminate.

This is common in OLTP systems with many concurrent writes.
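
When investigating either problem, a useful first step is to ask SQL Server who is waiting on whom. This query against the standard DMVs lists currently blocked requests and the sessions blocking them:

```sql
-- Find sessions that are currently blocked and who is blocking them
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```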


How to Reduce Blocking in SQL Server

One of the most powerful options available in SQL Server is enabling row versioning isolation levels.

These options bring SQL Server behavior closer to Oracle and PostgreSQL.


Read Committed Snapshot Isolation (RCSI)

RCSI allows SQL Server to use row versioning for read operations.

This eliminates many blocking situations between SELECT and UPDATE.

To enable it:

ALTER DATABASE [database_name] SET READ_COMMITTED_SNAPSHOT ON;

After enabling RCSI:

  • reads access versioned rows
  • readers no longer block writers
  • writers no longer block readers

Snapshot Isolation

Snapshot Isolation provides even stronger guarantees.

Transactions operate on a consistent snapshot of the database taken when the transaction begins.

To enable snapshot support:

ALTER DATABASE database_name SET ALLOW_SNAPSHOT_ISOLATION ON;
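
Note that this setting only permits snapshot isolation; unlike RCSI, each session must still request it explicitly. A sketch, with a hypothetical Orders table:

```sql
-- The session opts in; every statement in the transaction
-- then sees the snapshot taken when the transaction began
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM Orders;
-- ...other reads here see the same snapshot, regardless of concurrent commits...
COMMIT;
```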

Important Considerations Before Enabling Snapshot Isolation

Enabling row versioning changes how SQL Server stores row versions.

These versions are stored in TempDB.

This means your TempDB configuration becomes extremely important.

Make sure:

  • TempDB has sufficient disk space
  • TempDB uses fast storage (preferably SSD)
  • multiple TempDB data files exist
  • TempDB contention is minimized

Poor TempDB configuration can create new bottlenecks.
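
One way to keep an eye on this is to measure how much TempDB space the version store is consuming:

```sql
-- Approximate version store usage in TempDB, in megabytes
SELECT SUM(version_store_reserved_page_count) * 8 / 1024 AS version_store_mb
FROM tempdb.sys.dm_db_file_space_usage;
```

A version store that grows without bound usually indicates a long-running transaction preventing version cleanup.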


Azure SQL Database Behavior

Azure SQL Database behaves differently from traditional SQL Server installations.

By default, Azure SQL Database already uses:

Read Committed Snapshot Isolation

This design decision significantly reduces blocking problems in cloud environments.

Azure also manages TempDB automatically depending on the service tier.


Changing Isolation Level at the Session Level

Sometimes you don’t want to change the behavior of the entire database.

Instead, you can modify the isolation level only for a specific session or stored procedure.

Below is an example approach that temporarily switches isolation level to SNAPSHOT and restores the previous setting afterward.

ALTER DATABASE database_name SET ALLOW_SNAPSHOT_ISOLATION ON;
GO

CREATE PROCEDURE yourstoredprocedure
AS
BEGIN
    DECLARE @originalIsolationLevel NVARCHAR(20);

    -- Capture the session's current isolation level from DBCC USEROPTIONS
    CREATE TABLE #UserOptions (OptionName NVARCHAR(128), Value NVARCHAR(128));

    INSERT INTO #UserOptions
    EXEC ('DBCC USEROPTIONS');

    SELECT @originalIsolationLevel = Value
    FROM #UserOptions
    WHERE OptionName = 'isolation level';

    DROP TABLE #UserOptions;

    -- Switch to SNAPSHOT for the work below
    SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

    BEGIN TRY
        -- Your SQL logic here

        -- Restore the previous isolation level before returning
        IF @originalIsolationLevel = 'read committed'
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
    END TRY
    BEGIN CATCH
        DECLARE @ErrorMessage NVARCHAR(4000);
        SET @ErrorMessage = ERROR_MESSAGE();
        RAISERROR(@ErrorMessage, 16, 1);
    END CATCH
END

This approach is useful when dealing with specific blocking problems between SELECT and UPDATE statements.


Final Thoughts

Isolation levels are not just theoretical database concepts.

They directly influence:

  • system performance
  • concurrency behavior
  • blocking issues
  • deadlock frequency
  • query consistency

Understanding how each database engine implements concurrency is essential for designing scalable systems.

After decades working with databases, I can say confidently:

Many production performance problems are not caused by bad queries — they are caused by misunderstanding how transactions interact.

And isolation levels sit right at the center of that interaction.

🚀 Ready to boost your career in data?

👉 DBAcademy – DBA & Data Analyst Training
Over 1,300 lessons and 412 hours of exclusive content.
Includes subtitles in English, Spanish, and French.

🔗 https://filiado.wixsite.com/dbacademy

💡 Start learning today and become a highly in-demand data professional.


Sandro Servino is a senior IT professional with over 30 years of experience in technology, having worked as a Developer, Project Manager (acting as a Requirements Analyst and Scrum Master), Professor, IT Infrastructure Team Coordinator, IT Manager, and Database Administrator. He has been working with Database technologies since 1996 and has been vendor-certified since the early years of his career. Throughout his professional journey, he has combined deep technical expertise with leadership, education, and consulting experience in mission-critical environments.

Sandro has trained more than 20,000 students in database technologies, helping professionals build strong foundations and advance their careers in data platforms and database administration. He has delivered corporate training programs for multiple companies and served as a university professor teaching Database and Data Administration for over five years.

For many years, he worked as an independent consultant specializing in SQL Server, providing strategic and technical support for complex database environments. He has extensive experience in troubleshooting and resolving critical issues in SQL Server production environments, including performance tuning, high availability, disaster recovery, security, and infrastructure optimization.

His academic background includes:

  • Postgraduate Degree in School Education
  • MBA in IT Governance
  • Master’s Degree in Knowledge Management and Information Technology

Currently, Sandro works as a Database Administrator for multinational companies in Europe, managing enterprise-level SQL Server environments and supporting large-scale, high-demand infrastructures.
Areas of Expertise

  • SQL Server (Administration, Performance, HA/DR, Troubleshooting)
  • Azure SQL Databases
  • MySQL
  • Oracle
  • PostgreSQL
  • Power BI
  • Data Analytics
  • Data Warehouse
  • Windows Server
  • Oracle Linux Server
  • Ubuntu Linux Server
  • DBA Training and Mentorship
  • Business Continuity and Disaster Recovery Strategies

Courses and Training Programs

Sandro delivers professional training programs focused on the formation of DBAs and Data/BI Analysts, covering:

  • SQL Server and Azure SQL Databases
  • MySQL
  • Oracle
  • PostgreSQL
  • Power BI
  • Data Analytics
  • Data Warehouse
  • Windows Server
  • Oracle Linux Server
  • Ubuntu Linux Server

With a unique combination of technical depth, academic knowledge, real-world consulting experience, and international exposure, Sandro Servino brings practical, results-driven expertise to database professionals and organizations seeking reliability, performance, and resilience in their data platforms.
