

Every database has a transaction log that records all transactions and the database modifications made by each transaction.

The transaction log is a critical component of the database. If there is a system failure, you will need that log to bring your database back to a consistent state.

How does that work? What steps does SQL Server take to keep the database in a consistent state?

As an example, say I want to update 10 rows in my table, and those rows span 2 pages. Let's see, step by step, what SQL Server does.

  1. Read the pages into the buffer pool. If the pages aren't there, SQL Server reads the data from disk and puts it in memory.
  2. Start the locking process. SQL Server acquires intent-exclusive locks on the table and pages, and update locks at the row level.
  3. After all rows to be updated are locked, it's time to convert the update locks to exclusive locks and make the changes to the pages in memory. This process creates log records describing the changes that were made.
  4. Next, the transaction is ready to commit. This step writes the log records to the transaction log on disk and acknowledges the commit to the user.
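The steps above can be sketched in T-SQL. This is a minimal, hypothetical example (the table and column names are assumptions, not from the original post); the `sys.dm_tran_locks` query lets you see the locks held while the transaction is still open.

```sql
-- A minimal sketch of the flow above (dbo.Orders and its columns are hypothetical).
BEGIN TRANSACTION;

-- Steps 1-3: pages are read into the buffer pool, intent-exclusive locks are
-- taken on the table and pages, update locks on the rows, and the update locks
-- are converted to exclusive locks as the rows are changed in memory.
-- The log records describing the change are generated at this point.
UPDATE dbo.Orders
SET    Status = 'Shipped'
WHERE  OrderId BETWEEN 1 AND 10;

-- While the transaction is open, inspect the locks it holds:
SELECT resource_type, request_mode, request_status
FROM   sys.dm_tran_locks
WHERE  request_session_id = @@SPID;

-- Step 4: the commit hardens the log records to disk before acknowledging.
COMMIT TRANSACTION;
```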

The changed data is still in the buffer pool, while the log records are written to the transaction log on disk. The data will stay in memory until a checkpoint runs and writes the changes to the data files.
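You can force that flush yourself and check for dirty pages; a rough sketch, assuming you run it in the database you just modified:

```sql
-- Count dirty (modified-in-memory) pages for the current database:
SELECT COUNT(*) AS dirty_pages
FROM   sys.dm_os_buffer_descriptors
WHERE  database_id = DB_ID()
  AND  is_modified = 1;

-- Forcing a checkpoint writes the dirty pages to the data files.
CHECKPOINT;
```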


So, that's why transaction logs are critical and why we have to take care of them.

How to manage the transaction log

How to read transaction logs


How does Update work?

Updating data in SQL Server is a simple task: use the command UPDATE table SET column = value WHERE column = value. Don't forget the WHERE clause :).
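For instance (table and column names are hypothetical):

```sql
-- Without the WHERE clause, every row in the table would be updated.
UPDATE dbo.Customers
SET    City = 'Sao Paulo'
WHERE  CustomerId = 42;
```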

But what does SQL Server do internally? A delete plus an insert? A modify? Well, the answer depends.

Let's look at some examples of how that works.

First, create a database and table for the test.
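The original setup script isn't shown here; a minimal equivalent could look like this (the database, table, and column names are assumptions):

```sql
CREATE DATABASE LogDemo;
GO
USE LogDemo;
GO
-- No index is created, so dbo.TestTable is a heap.
CREATE TABLE dbo.TestTable
(
    Id   INT         NOT NULL,
    Name VARCHAR(50) NOT NULL
);
INSERT INTO dbo.TestTable (Id, Name) VALUES (1, 'Initial value');
GO
```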


As you can see, I didn't create an index, so I will be updating a heap table. Before updating the row, I will clear the existing log records and then update the row I want.
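A sketch of that step, using the hypothetical names from the setup above. In the SIMPLE recovery model, a checkpoint lets the log truncate, so the subsequent DBCC LOG output is mostly the records from our own update. Note that DBCC LOG is an undocumented command; `sys.fn_dblog` is the more commonly used alternative.

```sql
-- Start from a quiet log (SIMPLE recovery model assumed), then update the row:
CHECKPOINT;
UPDATE dbo.TestTable SET Name = 'New value' WHERE Id = 1;

-- Dump the log records; output format 3 includes extra detail.
DBCC LOG('LogDemo', 3);

-- Alternative: the fn_dblog table function.
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM   sys.fn_dblog(NULL, NULL);
```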


Ok, now the result of the DBCC LOG


As we can see, the result is a modify-row record in the log. What about using an index? Let's create a nonclustered index on the column I'm updating.
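Something along these lines (index name is an assumption):

```sql
-- Nonclustered index on the column being updated.
CREATE NONCLUSTERED INDEX IX_TestTable_Name ON dbo.TestTable (Name);
GO
CHECKPOINT;
UPDATE dbo.TestTable SET Name = 'Another value' WHERE Id = 1;
DBCC LOG('LogDemo', 3);
```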



The result is very different: now we can see a modify-row record for the heap, and in the nonclustered index I got a delete/insert pair. Very nice!

What if it were a clustered index?
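A sketch of the swap, reusing the hypothetical names from above; the clustered index is on Id, which is not the column being updated:

```sql
DROP INDEX IX_TestTable_Name ON dbo.TestTable;
GO
CREATE CLUSTERED INDEX CIX_TestTable_Id ON dbo.TestTable (Id);
GO
CHECKPOINT;
UPDATE dbo.TestTable SET Name = 'Clustered test' WHERE Id = 1;
DBCC LOG('LogDemo', 3);
```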


I dropped the nonclustered index and created a clustered one, and the result is:


Again, only the modify-row record in the clustered index. And what if we try to update the key of the clustered index?
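Changing the clustering key can be sketched like this (same hypothetical names):

```sql
CHECKPOINT;
-- Updating the clustering key moves the row to a new position in the index.
UPDATE dbo.TestTable SET Id = 2 WHERE Id = 1;
DBCC LOG('LogDemo', 3);
```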



I got a delete/insert result, as I was expecting, because I changed the key, just like with the nonclustered index. Now, if I try to update the row with the same value it already has, what will SQL Server do?
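For the same-value test, a simple way to express it (hypothetical names again):

```sql
CHECKPOINT;
-- Set the column to the value it already contains.
UPDATE dbo.TestTable SET Name = Name WHERE Id = 2;
DBCC LOG('LogDemo', 3);
```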


Nothing, no change at all.