Memory-optimized Logging

In a previous post I talked about how the transaction log works, but what happens when we use a memory-optimized table?

SQL Server has a feature called Memory-Optimized Objects to improve performance. In-memory nonclustered indexes are implemented using a data structure called a Bw-tree. A Bw-tree is a lock- and latch-free variation of a B-tree.

In-memory architecture: [image: In-Memory OLTP architecture]

To enable an application to use In-Memory OLTP, you need to complete the following tasks:

  • Create a memory-optimized data filegroup and add a container to the filegroup (see the sketch after this list).
  • Create memory-optimized tables and indexes.
  • Load data into the memory-optimized tables, and update statistics after loading the data but before creating the compiled stored procedures.
  • Create natively compiled stored procedures to access data in memory-optimized tables. You can also use traditional, interpreted Transact-SQL to access data in memory-optimized tables.
  • As needed, migrate data from existing tables to memory-optimized tables.
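
For the first task, here is a minimal sketch of creating the filegroup and container; the database name MyDatabase, the filegroup name, and the file path are placeholders:

ALTER DATABASE MyDatabase
ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE MyDatabase
ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container')
TO FILEGROUP imoltp_fg;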

In this new architecture, let’s see what happens at the logging level.

I created a table using code like the following:

CREATE TABLE InMemoryTable (
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED, -- example column definitions
    Name VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)

As you can see, the first part is the same as for a normal table; to make it an in-memory table we add the MEMORY_OPTIMIZED and DURABILITY options. In this case, I’d like you to pay attention to the DURABILITY option, which is configured as SCHEMA_ONLY.

Let’s take a look at the transaction log after creating the table. [image: transaction log records after creating the table]
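
One way to inspect the log yourself is the undocumented sys.fn_dblog function (I’m assuming that approach here; use it only on test systems):

SELECT [Current LSN], Operation, Context, [Transaction Name]
FROM sys.fn_dblog(NULL, NULL) -- NULL, NULL means no starting/ending LSN filter
ORDER BY [Current LSN];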

We can see all the system changes made to create the table. Now let’s insert a row into the table and look at the log again:

INSERT INTO InMemoryTable VALUES('Douglas Correa')


You can see that nothing changed, but where’s my data? The data is there in the table, but only in memory. As you can imagine, both logging and saving the data to disk are expensive operations.
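
A quick query confirms the row really is in the table, served entirely from memory:

SELECT * FROM InMemoryTable -- returns the inserted row even though no log records were written for it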

That means that with DURABILITY = SCHEMA_ONLY, the data won’t be there after a crash or a restart of the server. If we change that to SCHEMA_AND_DATA and look in the log file, we will see the log operations when inserting data.
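As far as I know, the DURABILITY option can’t simply be switched in place, so one way to make the change is to drop and recreate the table (reusing the illustrative columns from above):

DROP TABLE InMemoryTable;

CREATE TABLE InMemoryTable (
    Id INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
    Name VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA); -- now durable

INSERT INTO InMemoryTable VALUES('Douglas Correa');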


As you can see, the log operation is LOP_HK (HK stands for Hekaton, the original code name of the In-Memory OLTP engine); the row is in the log, so in case of a crash SQL Server can redo the operation.
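
To see just those records, you can filter the same sys.fn_dblog output on the operation name:

SELECT [Current LSN], Operation, Context, [Log Record Length]
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_HK';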


Memory-optimized tables are fast and can improve performance, especially if you don’t need to persist the data, but there are limitations. One limitation I didn’t like is that you can’t detach and attach the database to recreate the log file.



How the Transaction Log Works

Every database has a transaction log that records all transactions and the database modifications made by each transaction.

The transaction log is a critical component of the database. If there is a system failure, you will need that log to bring your database back to a consistent state.

How does that work? What steps does SQL Server take to keep the database in a consistent state?

As an example, suppose I want to update 10 rows in my table and those rows are stored on 2 pages. Let’s see, step by step, what SQL Server does (a sketch of the commands follows the list).

  1. Read the pages from the buffer pool; if the pages aren’t there, SQL Server will fetch the data from disk and put it in memory.
  2. Start the locking process: SQL Server will acquire intent-exclusive locks on the table and pages, and update locks on the rows.
  3. After all rows to be updated are locked, it’s time to convert the update locks to exclusive locks and make the changes to the pages in memory. This process creates log records describing the changes that were made.
  4. Next, the transaction is ready to commit. This step writes the log records to the transaction log on disk and acknowledges the commit to the user.
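
Here is a minimal sketch of such an update (the table MyTable and its columns are hypothetical), with a query against sys.dm_tran_locks to watch the locks while the transaction is still open:

BEGIN TRANSACTION;

UPDATE MyTable
SET Name = 'updated'
WHERE Id BETWEEN 1 AND 10; -- 10 rows, assumed to sit on 2 pages

-- Inspect the locks held by this session before committing
SELECT resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID;

COMMIT TRANSACTION; -- the log records are written to disk at commit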

The changed data is still in the buffer pool, and the change is recorded in the transaction log on disk. The data will stay in memory until a checkpoint runs and writes the changes to the data files.
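
You can also trigger this manually with the CHECKPOINT command, which writes the dirty pages in the buffer pool to the data files:

CHECKPOINT;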


So that’s why the transaction log is critical, and why we have to take care of it.
