The first checklist before putting any application on SQL Server should cover the default configurations.
Which configurations do I usually change?
Max Server Memory: by default SQL Server will grab all available memory, and that causes memory contention with the OS; set a sensible cap (see the sketch after this list)
MAXDOP: yes, even for OLTP workloads; the good thing is to configure it instead of leaving the default
Cost Threshold for Parallelism: the default value is too low, and the right number depends on the environment, so I monitor and change it as needed. I usually start with 40
Model File Sizes: the model database is the template used when new databases are created, and its default initial size and autogrowth are way too small; fix them to avoid file system fragmentation
Tempdb Files: look at how many CPUs the server has; if fewer than 8, create one data file per CPU, otherwise start with 8. All files should have the same size and the same autogrowth.
Database Mail: used to send alerts; very important so you get notified when something bad happens.
Backup Compression: the benefit will be much greater than the CPU cost.
Remote Dedicated Administrator Connection: provides a dedicated CPU scheduler and memory, and by default the DAC only works via RDP or physically on the server, so enable the remote option
Maintenance: start as soon as possible with CHECKDB, index maintenance, statistics, and backups. For this, I like Ola Hallengren's SQL Server Maintenance Solution.
Last but not least, Security: this is the most important item on the list. Watch out for user accounts (or even DBAs) running as sysadmin; see my tip on MSSQLTips. In addition, disable sa and disable unnecessary logins.
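A minimal T-SQL sketch of several of the settings above (the numbers are placeholders, not recommendations; they have to be tuned for the actual server):

```sql
-- Placeholder values: tune memory, MAXDOP and the cost threshold per server
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 28672;        -- leave headroom for the OS
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 40;   -- my usual starting point
EXEC sp_configure 'backup compression default', 1;
EXEC sp_configure 'remote admin connections', 1;          -- remote DAC
RECONFIGURE;

-- Give model a more realistic initial size and autogrowth so new databases inherit it
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 512MB, FILEGROWTH = 256MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 256MB, FILEGROWTH = 128MB);
```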
The most common and widely used indexes that we know are nonclustered indexes. These indexes are created using the famous B-tree (balanced tree), a tree data structure where we have a root and leaves.
A nonclustered index contains the index key values and row locators that point to the storage location of the table data. Generally, nonclustered indexes should be designed to improve the performance of frequently used queries that are not covered by the clustered index.
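For instance, a query that filters on an order date and returns only a customer ID can be fully covered by a nonclustered index with an included column. A minimal sketch (the table and column names are made up for illustration):

```sql
-- Hypothetical table and columns, for illustration only
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
ON dbo.Orders (OrderDate)
INCLUDE (CustomerID);
-- Covers: SELECT CustomerID FROM dbo.Orders WHERE OrderDate >= '2024-01-01';
```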
These indexes need regular maintenance because, as data is inserted, updated, or deleted, the indexes are updated to reflect the new data or to remove references to deleted data.
When this happens we call it fragmentation, and Brent Ozar can explain more about it.
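A common way to look at that fragmentation is the sys.dm_db_index_physical_stats DMV; a sketch, using the same hypothetical table as above:

```sql
-- Average fragmentation and page count per index of a given table
SELECT i.name AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID(N'dbo.Orders'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```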
The objective here is index maintenance: not exactly how to do it, but what not to do. I have seen, in some cases, a good reindex job, everything fine, tables distributed by number of rows, clustered indexes first, then nonclustered, but after all of that a final step running a shrink.
Do not use shrink database; it is not a good idea after a reindex. What is it? What will it do to the database? On the MSDN page we can see this:
Shrinking data files recovers space by moving pages of data from the end of the file to unoccupied space closer to the front of the file. When enough free space is created at the end of the file, data pages at the end of the file can be deallocated and returned to the file system.
The process SQL Server uses is ugly and results in index fragmentation again, so all that reindex work is thrown away. If you don't believe me, Paul Randal explains it in his article.
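The pattern looks something like this (a sketch with an illustrative database name; re-running the fragmentation query above after the shrink is what shows the damage):

```sql
-- Step 1: the maintenance job leaves the indexes nicely ordered
ALTER INDEX ALL ON dbo.Orders REBUILD;

-- Step 2: the extra "cleanup" step. Do NOT do this after a reindex:
-- moving pages from the end of the file scrambles the logical order again.
DBCC SHRINKDATABASE (N'MyDatabase');
```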
SQL Server associates columns, expressions, variables, and parameters with data types. Data types determine what kind of data can be stored in the field: Integers, characters, dates, money, binary strings, etc.
SQL Server supplies several built-in data types, but you can also define custom types.
Built-in data types are categorized as shown in the table below, where you can also see the precedence for converting to other data types. That is, when an operator combines two expressions of different data types, the data type precedence rules specify that the data type with the lower precedence is converted to the data type with the higher precedence.
SQL Server uses a fixed precedence order for data types; the full list is on the MSDN page.
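A minimal sketch of the rule in action: VARCHAR has lower precedence than INT, so in the comparison below the character value is implicitly converted to INT.

```sql
DECLARE @i INT         = 10,
        @s VARCHAR(10) = '10';

-- @s (lower precedence) is implicitly converted to INT before the comparison
SELECT CASE WHEN @i = @s THEN 'equal' ELSE 'not equal' END AS comparison_result;
```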
How, where, and when do you need to think about data storage? Well, this is the first step after you model your database, and most companies do not think about it. I have seen companies with a large amount of data on a single disk, or small data sets separated in the wrong way. So, what is the best thing to do?
We have some best practices, and you can read more about the logical structures in SQL Server called filegroups on the MSDN page. Every DBA needs to know how to create and maintain filegroups because they are part of every SQL Server database. Filegroups affect the performance, maintenance, and security of your data, and they are logical structures to group files together.
At a minimum, every SQL Server database has two operating system files: a data file and a log file. Data files contain data and objects such as tables, indexes, stored procedures, and views. Log files contain the information that is required to recover all transactions in the database. Data files can be grouped together in filegroups for allocation and administration purposes.
Filegroups can be created when the database is first created or created later when more files are added to the database. However, you cannot move files to a different filegroup after the files have been added to the database.
A file cannot be a member of more than one filegroup. Tables, indexes, and large object (LOB) data can be associated with a specific filegroup. This means that all their pages are allocated from the files in that filegroup.
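A minimal sketch of this (the database name, file paths, and sizes are made up): a database created with one user-defined filegroup, and a table whose pages will all be allocated from that filegroup.

```sql
-- Illustrative names, paths and sizes only
CREATE DATABASE Sales
ON PRIMARY
    (NAME = Sales_data,   FILENAME = 'D:\Data\Sales_data.mdf',   SIZE = 512MB, FILEGROWTH = 256MB),
FILEGROUP FG_Orders
    (NAME = Sales_orders, FILENAME = 'E:\Data\Sales_orders.ndf', SIZE = 1GB,   FILEGROWTH = 512MB)
LOG ON
    (NAME = Sales_log,    FILENAME = 'L:\Log\Sales_log.ldf',     SIZE = 256MB, FILEGROWTH = 128MB);
GO

-- All pages for this table are allocated from files in FG_Orders
CREATE TABLE dbo.Orders
(
    OrderID    INT IDENTITY(1,1) PRIMARY KEY,
    OrderDate  DATETIME2 NOT NULL,
    CustomerID INT       NOT NULL
) ON FG_Orders;
```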
Why use filegroups?
you have one or more objects that have heavy read/write activity
you’ve already tuned through indexes and query writing
you need a performance boost
you have additional storage you can utilize to separate the objects
you want to separate the data so administration tasks such as backups take less time (see the backup sketch after this list)
you have a large database (>1TB)
disaster recovery
separate data and log files onto separate disks
separate tempdb onto its own disk
at least two filegroups – primary and one user-defined (default)
files in a filegroup should be equally sized for equal proportion of writes
use filegroups to isolate objects with heavy read and write activity from each other
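As an example of the administration benefit mentioned above, a single filegroup can be backed up on its own (a sketch, reusing the hypothetical Sales database):

```sql
-- Back up only the busy filegroup instead of the whole database
BACKUP DATABASE Sales
    FILEGROUP = 'FG_Orders'
TO DISK = 'B:\Backup\Sales_FG_Orders.bak'
WITH COMPRESSION;
```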
The page is the fundamental unit of data storage in SQL Server. An extent is a collection of eight physically contiguous pages. Extents help efficiently manage pages.
Understanding the architecture of pages and extents is important for designing and developing databases that perform efficiently.
The fundamental unit of data storage in SQL Server is the page. The disk space allocated to a data file (.mdf or .ndf) in a database is logically divided into pages numbered contiguously from 0 to n. Disk I/O operations are performed at the page level. That is, SQL Server reads or writes whole data pages.
Extents are a collection of eight physically contiguous pages and are used to efficiently manage the pages. All pages are stored in extents.
Page size is 8 KB. This means SQL Server databases have 128 pages per megabyte. Each page begins with a 96-byte header that is used to store system information about the page.
Extents are the basic unit in which space is managed. An extent is eight physically contiguous pages, or 64 KB. This means SQL Server databases have 16 extents per megabyte.
To make its space allocation efficient, SQL Server does not allocate whole extents to tables with small amounts of data. SQL Server has two types of extents:
Uniform extents are owned by a single object; all eight pages in the extent can only be used by the owning object.
Mixed extents are shared by up to eight objects. Each of the eight pages in the extent can be owned by a different object.
A new table or index is generally allocated pages from mixed extents. When the table or index grows to the point that it has eight pages, it then switches to use uniform extents for subsequent allocations. If you create an index on an existing table that has enough rows to generate eight pages in the index, all allocations to the index are in uniform extents.
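To see those allocations for a table you can query the allocation metadata; a sketch (the join covers in-row and row-overflow allocation units, and the table name is the hypothetical one used earlier):

```sql
-- 1 page = 8 KB, 1 extent = 8 pages = 64 KB
SELECT o.name                        AS table_name,
       au.type_desc,
       au.total_pages,
       au.total_pages / 8.0          AS extents,
       au.total_pages * 8 / 1024.0   AS size_mb
FROM sys.allocation_units AS au
JOIN sys.partitions       AS p ON p.hobt_id   = au.container_id
JOIN sys.objects          AS o ON o.object_id = p.object_id
WHERE o.name = 'Orders';
```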