Insert Statement Comparison - Interesting Test

Performance of Inserts in a Database
I recently wrote an article about how we can benefit from Delayed Durability to handle a large amount of log activity and the related locking issues
which lead to I/O contention.
You can take a look at the article here.
For that we used an insert example and showed how Delayed Durability helps to improve performance.

Delayed Durability - SQL Server 2014


One more lesser-known but very useful feature of SQL Server 2014 is delayed durability.
In-memory (memory-optimized) tables also have their own durability option, but here we are talking about normal disk-based tables and will see how to use delayed durability with them.
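As a side note, for memory-optimized tables durability is chosen per table at creation time. A minimal sketch just for comparison (the filegroup name, file path, and table name below are placeholders and are not part of the test in this post):

-- Memory-optimized tables need a filegroup that CONTAINS MEMORY_OPTIMIZED_DATA (names and path are placeholders)
ALTER DATABASE TEST ADD FILEGROUP TEST_IMO_FG CONTAINS MEMORY_OPTIMIZED_DATA
ALTER DATABASE TEST ADD FILE (NAME = 'TEST_IMO_FILE', FILENAME = 'C:\DATA\TEST_IMO_FILE') TO FILEGROUP TEST_IMO_FG

-- DURABILITY = SCHEMA_AND_DATA keeps the data after a restart, SCHEMA_ONLY keeps only the table definition
CREATE TABLE T_INMEM
(ID INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 100000),
 DATA VARCHAR(100))
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY)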

First we have to set the database option DELAYED_DURABILITY.
It has three possible values: DISABLED / ALLOWED / FORCED.


ALTER DATABASE TEST
SET DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }



By default this option is set to DISABLED.

DISABLED :- Delayed durability is disabled and all transactions are fully durable.
ALLOWED :- Each transaction's durability is decided at the transaction level
by committing with the option DELAYED_DURABILITY = ON/OFF (see the example after this list).
FORCED :- Every transaction is delayed durable.
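Under ALLOWED, the choice is made per commit. A minimal sketch, using the test table T that is created later in this post:

ALTER DATABASE TEST
SET DELAYED_DURABILITY = ALLOWED

BEGIN TRANSACTION
       INSERT INTO T VALUES (1, 'DATA1')
-- This commit returns without waiting for its log records to be flushed to disk
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)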

You can also enable it from SSMS under Database Properties > Options > Delayed Durability.
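To verify the current setting from T-SQL, you can query sys.databases (shown here for the Test database used in this post):

SELECT name, delayed_durability_desc
FROM sys.databases
WHERE name = 'Test'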






Let's see an example to understand it better.


CREATE TABLE T
(ID INT,
DATA VARCHAR(100))


 

Now we will insert data into the table.
First we will insert the data with DELAYED_DURABILITY set to DISABLED.





DECLARE @DATE1 DATETIME
DECLARE @DATE2 DATETIME

-- Fully durable: every commit waits for its log records to be hardened to disk
ALTER DATABASE TEST
SET DELAYED_DURABILITY = DISABLED

TRUNCATE TABLE T

-- Snapshot the file-level I/O stats before the test
SELECT * INTO #before_disabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE1 = GETDATE()

SET NOCOUNT ON

-- Insert 100,000 rows, one row per transaction
DECLARE @I INT = 1
WHILE @I <= 100000
       BEGIN
              INSERT INTO T VALUES (@I, 'DATA' + CONVERT(VARCHAR(10), @I))
              SET @I = @I + 1
       END

SET NOCOUNT OFF

SELECT * INTO #after_disabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE2 = GETDATE()
SELECT DATEDIFF(SS, @DATE1, @DATE2) AS ELAPSED_SECONDS


SELECT
       B.DATABASE_ID, B.[FILE_ID],
       -- file_id 2 is the transaction log file, file_id 1 is the primary data file
       CASE WHEN A.[FILE_ID] = 2 THEN 'LOG' ELSE 'DATA' END AS FILE_TYPE,
       A.NUM_OF_READS - B.NUM_OF_READS AS NUM_OF_READS,
       A.NUM_OF_BYTES_READ - B.NUM_OF_BYTES_READ AS NUM_OF_BYTES_READ,
       A.NUM_OF_WRITES - B.NUM_OF_WRITES AS NUM_OF_WRITES,
       (A.NUM_OF_BYTES_WRITTEN - B.NUM_OF_BYTES_WRITTEN)/(1024*1024) AS NUM_OF_BYTES_WRITTEN_MB
FROM   #BEFORE_DISABLED AS B
       INNER JOIN #AFTER_DISABLED A ON B.DATABASE_ID = A.DATABASE_ID AND B.[FILE_ID] = A.[FILE_ID]

GO

DECLARE @DATE1 DATETIME
DECLARE @DATE2 DATETIME

-- Delayed durable: commits return before their log records are flushed to disk
ALTER DATABASE TEST
SET DELAYED_DURABILITY = FORCED

TRUNCATE TABLE T

-- Snapshot the file-level I/O stats before the test
SELECT * INTO #before_enabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE1 = GETDATE()

SET NOCOUNT ON

-- Insert the same 100,000 rows with delayed durability
DECLARE @I INT = 1
WHILE @I <= 100000
       BEGIN
              INSERT INTO T VALUES (@I, 'DATA' + CONVERT(VARCHAR(10), @I))
              SET @I = @I + 1
       END

SET NOCOUNT OFF

SELECT * INTO #after_enabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE2 = GETDATE()

SELECT DATEDIFF(SS, @DATE1, @DATE2) AS ELAPSED_SECONDS


SELECT
       B.DATABASE_ID, B.[FILE_ID],
       -- file_id 2 is the transaction log file, file_id 1 is the primary data file
       CASE WHEN A.[FILE_ID] = 2 THEN 'LOG' ELSE 'DATA' END AS FILE_TYPE,
       A.NUM_OF_READS - B.NUM_OF_READS AS NUM_OF_READS,
       A.NUM_OF_BYTES_READ - B.NUM_OF_BYTES_READ AS NUM_OF_BYTES_READ,
       A.NUM_OF_WRITES - B.NUM_OF_WRITES AS NUM_OF_WRITES,
       (A.NUM_OF_BYTES_WRITTEN - B.NUM_OF_BYTES_WRITTEN)/(1024*1024) AS NUM_OF_BYTES_WRITTEN_MB
FROM   #BEFORE_ENABLED AS B
       INNER JOIN #AFTER_ENABLED A ON B.DATABASE_ID = A.DATABASE_ID AND B.[FILE_ID] = A.[FILE_ID]








Here we can see that with delayed durability enabled (FORCED) the inserts complete in about 4 seconds, compared to 22 seconds with DELAYED_DURABILITY = DISABLED.

The first run needed 100,008 writes and wrote 59 MB of data.

After enabling delayed durability it took only 565 writes and 32 MB of data was written.

As we can see, the performance gain is impressive, and this can be an option for solving many I/O-related issues where the log file and locking are the main points of contention.

Let's also see how much I/O was required in each case.




This is because delayed durability is less likely to contend for log I/O. Committing data from the log buffer in large chunks reduces contention much more than writing
an individual log entry for every commit.
The risk is that if the system crashes for any reason before the data is written from the buffer to disk, we will lose that data.
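One way to observe this contention is to check the WRITELOG wait type before and after each test run (a simple check, not part of the original test above):

-- WRITELOG waits accumulate while sessions wait for their log flushes to complete
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG'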

Important points to remember:

1. Delayed durable transaction commits are asynchronous.
2. Transactions become durable only when their log entries are flushed to disk.
3. The transaction log is written in batches instead of an individual write for each operation.
4. It reduces waits in the system.
5. It reduces I/O requirements.
6. The log is written to disk when the log buffer fills up, when a fully durable transaction commits in the same database, or when the system procedure sys.sp_flush_log is run (see the example below).
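For example, you can force all committed delayed durable transactions to become durable at a point you choose (such as before planned maintenance):

-- Flushes the log buffer to disk, hardening all previously committed delayed durable transactions
EXEC sys.sp_flush_log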


