
Memory-optimized Logging

In a previous post I talked about how the transaction log works, but what happens when we use a memory-optimized table?

SQL Server has the feature Memory-Optimized Objects to improve performance. In-memory nonclustered indexes are implemented using a data structure called a Bw-Tree. A Bw-Tree is a lock and latch-free variation of a B-Tree.

In-memory architecture:

To enable an application to use In-Memory OLTP, you need to complete the following tasks:

  • Create a memory-optimized data filegroup and add a container to the filegroup (see the sketch after this list).
  • Create memory-optimized tables and indexes.
  • Load data into the memory-optimized table and update statistics after loading the data and before creating the compiled stored procedures.
  • Create natively compiled stored procedures to access data in memory-optimized tables. You can also use traditional, interpreted Transact-SQL to access data in memory-optimized tables.

  • As needed, migrate data from existing tables to memory-optimized tables.
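
As a quick sketch of that first step (the database name, filegroup name, and path are assumptions):

ALTER DATABASE MyDb ADD FILEGROUP imoltp_fg CONTAINS MEMORY_OPTIMIZED_DATA;
-- the container is a folder, not a regular data file
ALTER DATABASE MyDb ADD FILE (NAME = 'imoltp_container', FILENAME = 'C:\Data\imoltp_container') TO FILEGROUP imoltp_fg;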

In this new architecture, let's see what happens at the logging level.

I created the table using code along these lines (the original post showed it as an image, so the column definitions here are reconstructed):

CREATE TABLE InMemoryTable (
    Id INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,
    Name VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

As you can see, the first part is the same as for a normal table; what makes it an in-memory table are the MEMORY_OPTIMIZED and DURABILITY options. In this case, pay attention to DURABILITY, which is set to SCHEMA_ONLY.

Let's take a look at the transaction log after creating the table.
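
The log screenshots in this post come from reading the log directly; a query along these lines, using the undocumented sys.fn_dblog function, shows the log records (a sketch, not necessarily the exact query I used):

-- sys.fn_dblog(NULL, NULL) returns the active log; the two arguments are start and end LSN
SELECT [Current LSN], Operation, Context, AllocUnitName
FROM sys.fn_dblog(NULL, NULL);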

We can see all the system-table changes made to create the table. Now, let's insert a row into the table and look at the log again:

INSERT INTO InMemoryTable VALUES('Douglas Correa')


You can see that nothing changed, so where's my data? The data is in the table, but only in memory. As you can imagine, both logging and saving data to disk are expensive operations.

That means that with DURABILITY = SCHEMA_ONLY the data won't survive a crash or a server restart. If we change it to SCHEMA_AND_DATA and look at the log file again, we will see the log operation produced when inserting data.
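
As far as I know, DURABILITY can't simply be toggled in place, so one way is to recreate the table; a sketch, using the same assumed column definitions as before:

DROP TABLE InMemoryTable;
CREATE TABLE InMemoryTable (
    Id INT IDENTITY NOT NULL PRIMARY KEY NONCLUSTERED,   -- durable memory-optimized tables require a primary key
    Name VARCHAR(100) NOT NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
INSERT INTO InMemoryTable VALUES('Douglas Correa');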


As you can see, the log operation is LOP_HK; the row is in the log, so in case of a crash SQL Server can redo it.


Memory-optimized tables are fast and can improve performance, especially if you don't need to persist the data, but there are limitations. One limitation I didn't like is that you can't detach and attach the database to recreate the log file.


What does the Checkpoint do?

Checkpoints flush dirty data pages from the buffer cache of the current database to disk. This minimizes the active portion of the log that must be processed during a full recovery of a database. During a full recovery, the following types of actions are performed:

  • The log records of modifications not flushed to disk before the system stopped are rolled forward.
  • All modifications associated with incomplete transactions, such as transactions for which there is no COMMIT or ROLLBACK log record, are rolled back.

Checkpoints occur in the following situations:

  • A CHECKPOINT statement is explicitly executed (see the example after this list). A checkpoint occurs in the current database for the connection.
  • A minimally logged operation is performed in the database; for example, a bulk-copy operation is performed on a database that is using the Bulk-Logged recovery model.
  • Database files have been added or removed by using ALTER DATABASE.
  • An instance of SQL Server is stopped by a SHUTDOWN statement or by stopping the SQL Server (MSSQLSERVER) service. Either action causes a checkpoint in each database in the instance of SQL Server.
  • An instance of SQL Server periodically generates automatic checkpoints in each database to reduce the time that the instance would take to recover the database.
  • A database backup is taken.
  • An activity requiring a database shutdown is performed. For example, AUTO_CLOSE is ON and the last user connection to the database is closed, or a database option change is made that requires a restart of the database.
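
For reference, the first case is just the plain statement (the database name is hypothetical):

USE MyDatabase;
CHECKPOINT;   -- optionally CHECKPOINT 10 to request completion within about 10 seconds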


VLF (Virtual Log Files)


To see how many VLFs you have, simply look at the number of rows returned by DBCC LOGINFO.
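
For example (DBCC LOGINFO is undocumented; sys.dm_db_log_info is the documented alternative on SQL Server 2016 SP2 and later):

DBCC LOGINFO;   -- one row per VLF
SELECT COUNT(*) AS vlf_count
FROM sys.dm_db_log_info(DB_ID());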

The size and number of VLFs you'll have depend largely on the size of each chunk added to your transaction log.

There is no general rule for determining the best values for the auto-growth option, as these vary from case to case. Having too many or too few virtual log files causes bad performance.

An excessive number of VLFs can negatively impact all transaction-log-related activities, and you may even see degraded performance when transaction log backups occur.

Most of the time, excessive VLF fragmentation is brought about by frequent file growth in small increments. For example, a database whose transaction log file grows 5 MB at a time will end up with a large number of VLFs if the log grows often.

Growth                  Number of VLFs created
<= 64 MB                4
> 64 MB and <= 1 GB     8
> 1 GB                  16

There is insufficient system memory in resource pool

While doing crash and recovery tests on my local machine, the SQL Server instance would not come online. After a third attempt to bring the instance online, thinking it was some other problem, I looked at the SQL Server errorlog file and saw the issue.

Not enough memory; but it wasn't my machine that was short, it was the resource pool. So, what is a resource pool?

A resource pool represents a subset of the physical resources of an instance of the Database Engine, and in my case it had insufficient memory. Let's see the errorlog file:

2018-03-14 16:19:58.09 spid56s     [ERROR] Recovery failed with error 0x83000000 on database 18. This error will be mapped to 'HK_E_RESTORE_INSUFFICIENT_MEMORY' (0x8200002e). (sql\ntdbms\hekaton\runtime\src\hkruntime.cpp : 4805 - 'HkRtRestoreDatabase')
2018-03-14 16:19:58.09 spid34s     [INFO] HkCkptCtrlUninitialize(): Database ID: [18]. Cleaning up StorageArray. LastClosedCheckpointEndTs: '158'
2018-03-14 16:19:58.09 Server      Error: 17300, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2018-03-14 16:19:58.09 Server      Error: 17312, Severity: 16, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2018-03-14 16:19:58.09 Server      Error: 28709, Severity: 16, State: 19. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2018-03-14 16:19:58.09 spid37s     Error: 701, Severity: 17, State: 137.
2018-03-14 16:19:58.09 spid37s     There is insufficient system memory in resource pool 'default' to run this query.
2018-03-14 16:19:58.09 spid39s     Error: 701, Severity: 17, State: 137.
2018-03-14 16:19:58.09 spid39s     There is insufficient system memory in resource pool 'default' to run this query.
2018-03-14 16:19:58.11 spid55s     [ERROR] Recovery failed with error 0x83000000 on database 15. This error will be mapped to 'HK_E_RESTORE_INSUFFICIENT_MEMORY' (0x8200002e). (sql\ntdbms\hekaton\runtime\src\hkruntime.cpp : 4805 - 'HkRtRestoreDatabase')
2018-03-14 16:19:58.11 spid31s     [INFO] HkCkptCtrlUninitialize(): Database ID: [15]. Cleaning up StorageArray. LastClosedCheckpointEndTs: '155'
2018-03-14 16:19:58.20 spid31s     SQL Server shutdown has been initiated
2018-03-14 16:19:58.21 spid31s     Error: 19032, Severity: 10, State: 1. (Params:). The error is printed in terse mode because there was error during formatting. Tracing, ETW, notifications etc are skipped.
2018-03-14 16:19:58.28 spid34s     SQL Server shutdown has been initiated

After the service started, SQL Server ran the redo and undo phases; that means it was reading the log files and creating compensation log records for any uncommitted transactions it found.

SQL Server needs memory in the buffer pool to complete the redo and undo phases, and I didn't remember changing any SQL Server memory configuration.

So, my approach was to connect to SQL Server via the command line while the instance was still up and run sp_configure to see how much memory was configured. Only 512 MB was set for Max Server Memory, and that was the problem. (Max server memory controls the SQL Server memory allocation: compile memory, all caches (including the buffer pool), query execution memory grants, lock manager memory, and CLR memory.)

In my environment, with 26 databases and my crash recovery tests, 512 MB for the pool wasn't enough; after changing the configuration to 4096 MB, I brought the instance online again.
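
The commands were along these lines, with the values from my case:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';          -- the run value showed 512
EXEC sp_configure 'max server memory (MB)', 4096;    -- the new value
RECONFIGURE;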


The lesson: first, read the errorlog file to get more information about what SQL Server is doing. Also, knowing about transaction log operations, log records, checkpoints, and how crash recovery works is fundamental.

As a process, SQL Server can acquire more memory than specified by the max server memory option. Both internal and external components can allocate memory outside of the buffer pool, which consumes additional memory, but the memory allocated to the buffer pool usually still represents the largest portion of memory consumed by SQL Server.



Every database has a transaction log that records all transactions and the database modifications made by each transaction.

The transaction log is a critical component of the database. If there is a system failure, you will need that log to bring your database back to a consistent state.

How does that work? What steps does SQL Server take to keep the database in a consistent state?

As an example, say I want to update 10 rows in my table and those rows sit on 2 pages; let's see, step by step, what SQL Server does.

  1. Read the pages from the buffer pool; if the pages aren't there, SQL Server reads the data from disk and puts it in memory.
  2. Start the locking process: SQL Server acquires intent-exclusive locks on the table and pages, and update locks on the rows.
  3. After all rows to be updated are locked, it's time to convert the update locks to exclusive locks and make the changes to the pages in memory. This process creates log records describing the changes made.
  4. Next, the transaction is ready to commit. This step writes the log records to the transaction log on disk and acknowledges the commit to the user.

The changed data is still in the buffer pool, while the change is written to the transaction log on disk. The data will stay in memory until a checkpoint runs and writes the changes to the data files.
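
You can watch this write-ahead behavior yourself; a minimal sketch, assuming a hypothetical dbo.MyTable and the undocumented sys.fn_dblog:

BEGIN TRAN;
UPDATE dbo.MyTable SET Name = 'new value' WHERE Id <= 10;
SELECT [Current LSN], Operation, Context
FROM sys.fn_dblog(NULL, NULL)
WHERE Operation = 'LOP_MODIFY_ROW';   -- the log already describes the change, before the commit
COMMIT;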


So, that's why transaction logs are critical and we have to take care of them.

How to manage the transaction log

How to read transaction logs


SQL Operations Studio on Linux

Great news: a visual way to manage SQL Server from Linux.

SQL Operations Studio fills a gap we had been waiting on: developing and managing SQL Server databases on Linux. At https://github.com/Microsoft/sqlopsstudio we can find more information, including the features below.

Feature Highlights

  • Cross-Platform DB management for Windows, macOS and Linux with simple XCopy deployment
  • SQL Server Connection Management with Connection Dialog, Server Groups, and Registered Servers
  • Object Explorer supporting schema browsing and contextual command execution
  • T-SQL Query Editor with advanced coding features such as autosuggestions, error diagnostics, tooltips, formatting and peek definition
  • Query Results Viewer with advanced data grid supporting large result sets, export to JSON\CSV\Excel, query plan and charting
  • Management Dashboard supporting customizable widgets with drill-through actionable insights
  • Visual Data Editor that enables direct row insertion, update and deletion into tables
  • Backup and Restore dialogs that enables advanced customization and remote filesystem browsing, configured tasks can be executed or scripted
  • Task History window to view current task execution status, completion results with error messages and task T-SQL scripting
  • Scripting support to generate CREATE, SELECT and DROP statements for database objects
  • Workspaces with full Git integration and Find In Files support to managing T-SQL script libraries
  • Modern light-weight shell with theming, user settings, full screen support, integrated terminal and numerous other features

Let’s see how it works:

The process is very simple. The first step is to download the file from this link: https://go.microsoft.com/fwlink/?linkid=862646

After downloading, extract the files into a folder of your choice and run the sqlops file.

My first impression is quite good: a solid visual tool for working with SQL Server on Linux. I like the execution plan view and the database management features.




Compared with SQL Server Management Studio, I miss some options, such as:

  • LinkedServer Objects
  • Maintenance Plans
  • Replication
  • Database Mail
  • Extended Events

The conclusion is that I really can work with it, running all my scripts and creating objects like linked servers, extended events, or replication. That helps a lot when trying out a new platform.


How does Update work?

Updating data in SQL Server is a simple task: UPDATE table SET column = value WHERE column = value. Don't forget the WHERE clause :).

But what does SQL Server do internally? A delete plus an insert? A modify? Well, the answer depends.

Let's see some examples of how that works.

First, create a database and table for the test.
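
The original showed this step as an image; a sketch of what the setup looked like (the database, table, and column names are assumptions):

CREATE DATABASE UpdateTest;
GO
USE UpdateTest;
GO
CREATE TABLE dbo.Test (Id INT NOT NULL, Name VARCHAR(100) NOT NULL);   -- no index: a heap
INSERT INTO dbo.Test VALUES (1, 'Douglas Correa');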


As you can see, I didn't create an index, so I will be updating a heap. Before updating the row, I clear the existing log records (so the output is easier to read) and then update the row I want.
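
Something like this, assuming the database uses the SIMPLE recovery model (so a checkpoint truncates the inactive log and keeps the output short):

CHECKPOINT;
UPDATE dbo.Test SET Name = 'New Name' WHERE Id = 1;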


OK, now the result of DBCC LOG:
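
DBCC LOG is undocumented; the call is simply:

DBCC LOG('UpdateTest', 1);   -- the second argument (0-4) controls the level of detail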


As we can see, the result is a modify of the row in the log. What about using an index? Let's create a nonclustered index on the column I'm updating.
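
A sketch of that step (the index name and new value are assumptions):

CREATE NONCLUSTERED INDEX IX_Test_Name ON dbo.Test (Name);
CHECKPOINT;
UPDATE dbo.Test SET Name = 'Another Name' WHERE Id = 1;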



The result is very different: we still see a modified row in the heap, but on the nonclustered index I got a delete/insert pair. Very nice!

What if it was a clustered index?
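
A sketch of the change, continuing with the same assumed names:

DROP INDEX IX_Test_Name ON dbo.Test;
CREATE CLUSTERED INDEX CIX_Test_Id ON dbo.Test (Id);
CHECKPOINT;
UPDATE dbo.Test SET Name = 'Clustered Test' WHERE Id = 1;   -- Name is not the clustering key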


I dropped the nonclustered index and created a clustered one, and the result is:


Again, only a modified row in the clustered index. And what if we try to update the clustered index key?
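
That is, something like:

CHECKPOINT;
UPDATE dbo.Test SET Id = 2 WHERE Id = 1;   -- Id is the clustering key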



I got a delete/insert result, as I was expecting, because I changed the key, just like with the nonclustered index. Now, if I try to update the row to the same value it already has, what will SQL Server do?
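
A sketch:

CHECKPOINT;
UPDATE dbo.Test SET Name = 'Clustered Test' WHERE Id = 2;   -- the value the row already has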


Nothing, no change at all.