SQLPS PowerShell in SQL Server - 4

Let's look at some more cmdlets.

1. Backup-SqlDatabase
First we need to check the examples for how to use Backup-SqlDatabase.

Run this command:
Get-Help Backup-SqlDatabase -Examples



It will show us usage examples.
We can get examples for any cmdlet in the same way.


Now we will run a sample backup:

Backup-SqlDatabase -Database Test




It will take a backup of the database Test.

How do we redirect the result to an HTML page?

Run this command

 Invoke-Sqlcmd "select * from sys.tables" | ConvertTo-Html > c:\test.html




Let's see the result by opening test.html in a browser.

Managed lock priority - SQL 2014

Managed lock priority is available in SQL 2014 and is used for online operations, mainly partition switching and rebuilding indexes online.

Until now it has been a problem to switch a partition while the table is in use, because the switch requires a schema modification (SCH-M) lock.
This lock cannot be acquired while any other operation is running on the table, so switching a partition has been a nightmare for developers.
The waiting SCH-M lock request also creates a chain of blocked requests behind it, hurting the performance of OLTP operations.


In SQL 2014 we now have an option to define the priority of a partition switch or online index rebuild relative to other operations.
Let's see how to use this feature.

If we look at the syntax of ALTER TABLE or ALTER INDEX, we will find this option:


WAIT_AT_LOW_PRIORITY ( MAX_DURATION = <time> [ MINUTES ], ABORT_AFTER_WAIT =
{ NONE | SELF | BLOCKERS } )

MAX_DURATION is the time, in minutes, that the operation will wait at low priority before the ABORT_AFTER_WAIT action is taken.

ABORT_AFTER_WAIT: the action to take when the wait time expires
NONE: take no action and continue waiting for the lock
SELF: kill the session's own operation
BLOCKERS: kill the sessions that are blocking the operation, then perform the switch or rebuild

But killing sessions has a drawback: the killed transactions are rolled back, and if we don't know which operations are being rolled back it may impact OLTP operations.
So we need to be careful with this option and use it when there is low activity, or when the switch/rebuild has higher priority than the other tasks.

Let's create a table and check this behavior.


CREATE PARTITION FUNCTION part_test_func (datetime)
AS RANGE LEFT FOR VALUES ('2014-09-10', '2014-09-20', '2014-09-30');


CREATE PARTITION SCHEME part_test_scheme
AS PARTITION part_test_func TO ([PRIMARY], [PRIMARY], [PRIMARY], [PRIMARY]);


CREATE TABLE part_test (id   int IDENTITY(1, 1),
                        data varchar(100),
                        date datetime)
ON part_test_scheme (date);

CREATE TABLE part_test1 (id   int IDENTITY(1, 1),
                         data varchar(100),
                         date datetime)
ON part_test_scheme (date);


INSERT INTO part_test VALUES ('a', '2014-09-05');
INSERT INTO part_test VALUES ('a', '2014-09-15');
INSERT INTO part_test VALUES ('a', '2014-09-25');
INSERT INTO part_test VALUES ('a', '2014-10-05');



Now, in another window, run this query:

BEGIN TRAN

SELECT *
FROM   part_test WITH (UPDLOCK)









Now we will attempt the partition switch in the first window by running this query:
ALTER TABLE part_test switch PARTITION 1 TO part_test1 PARTITION 1
WITH (WAIT_AT_LOW_PRIORITY (MAX_DURATION = 1 MINUTES,
ABORT_AFTER_WAIT = BLOCKERS))
It will wait for 1 minute, then kill the blocking session and complete the switch.

But we also need to know which sessions were killed.
We can find these details in the errorlog.
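A minimal sketch of how we might search for those entries with sp_readerrorlog, assuming the logged message text contains 'ABORT_AFTER_WAIT' (the exact wording can differ by build); the first two parameters are the log number (0 = current) and the log type (1 = SQL Server errorlog):

EXEC sp_readerrorlog 0, 1, 'ABORT_AFTER_WAIT'   -- search the current SQL Server errorlog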




Computed Columns 2


Sometimes we use a scalar function to generate computed column values.
These values also cannot be persisted unless the function is marked as schema bound.

Let's see an example:
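A minimal sketch (the table, column, and function names are illustrative): the scalar function must be created WITH SCHEMABINDING before a computed column that calls it can be marked PERSISTED.

CREATE FUNCTION dbo.fn_total (@qty int, @price money)
RETURNS money
WITH SCHEMABINDING
AS
BEGIN
    RETURN @qty * @price;
END;
GO

CREATE TABLE dbo.order_calc
(
    id    int IDENTITY(1, 1),
    qty   int,
    price money,
    total AS dbo.fn_total(qty, price) PERSISTED   -- fails if the function is not schema bound
);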


Computed columns 1


USE test

We have used computed columns many times, but we get the most benefit from them when we can mark them as PERSISTED.
For this, the column expression must be deterministic.
Let's see how we can do that:
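A minimal sketch (the table and column names are illustrative): a computed column built from a deterministic expression can be marked PERSISTED, while one based on a nondeterministic function such as GETDATE() cannot.

CREATE TABLE dbo.calc_test
(
    id    int IDENTITY(1, 1),
    qty   int,
    price money,
    total AS qty * price PERSISTED      -- deterministic expression: allowed
    -- loaded AS GETDATE() PERSISTED    -- nondeterministic: raises an error
);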


How to get list of servers running in network using PowerShell

As a DBA it is your responsibility to keep a list of the SQL Servers running on the network and manage them. As the infrastructure grows, it gets harder to maintain that list along with the servers' versions.
It's easy in PowerShell.
Let's see how to do it.

SQLPS Powershell in SQL Server -3

Now we will move on to some cmdlets and see how to use them.
One question that comes to mind is: how do we run a query from PowerShell?

Let's look at that first.
For this there is the cmdlet Invoke-Sqlcmd.

SQLPS Powershell in SQL Server -2

OK, so now we are connected to SQLPS.
First we will navigate through the different folders of SQL Server.

SQLPS - Powershell in SQL Server

Anyone who uses SQL Server has probably heard of PowerShell, even if they have not used it.
It's a nice utility and sometimes makes work easier than routine coding.
Today we will first get a basic idea of how to use it.
Before SQL Server 2012 it was a separate exe, SQLPS.exe, but from 2012 onwards Microsoft ships it as a separate PowerShell module. So even though SQLPS.exe is still available, it will be deprecated in the future.

Online reindexing of partition.


Up to SQL 2012 it was not possible to rebuild an index partition online.
We had two options (SQL 2014 changes this; see the sketch after this list):

1. Rebuild the index for the entire table online, or
2. Rebuild the single partition offline.
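A minimal sketch (the index and table names are illustrative) of the SQL 2014 syntax that rebuilds a single partition online:

ALTER INDEX ix_part_test_date ON dbo.part_test
REBUILD PARTITION = 2
WITH (ONLINE = ON);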

Nonclustered index got corrupted.

Recently we had an issue with nonclustered index corruption on a table. We faced the issue multiple times and ran DBCC CHECKTABLE to get the details of the corruption. Each time we dropped and recreated the index, it immediately became corrupted again. Our system had been running fine for the last 3 years when we suddenly hit this issue.
Because of it, all our updates and inserts started to fail, and since this is our main summary table we were helpless until we resolved the issue.
We also checked the errorlog.
Below is an image of the error.

How to get the physical location of a row


%%physloc%% is an undocumented feature that returns the physical location of a row.
This helps while doing advanced troubleshooting.
It's fun to explore the internals of SQL Server while doing regular tasks.

First we will look at %%physloc%%:
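A minimal sketch (the table name is illustrative): %%physloc%% returns the binary file:page:slot address of each row, and the undocumented sys.fn_PhysLocFormatter function renders it in readable form.

SELECT sys.fn_PhysLocFormatter(%%physloc%%) AS physical_location,   -- (file:page:slot)
       *
FROM   dbo.part_test;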

Table Valued Parameters


Table-valued parameters are user-defined table types used to pass multiple rows of data as input to a procedure or function (a short sketch follows the list of benefits below).

Benefits 

* We don't need many parameters
* We don't need temporary tables
* We don't need XML to pass multiple values
* We can pass multiple values of different data types
* They enable the client to specify sort order and unique keys
* From SQL 2012 onwards, table-valued parameters are cached for all queries
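A minimal sketch (the type, procedure, and column names are illustrative): declare a table type, accept it as a READONLY parameter, and pass rows to the procedure.

CREATE TYPE dbo.OrderList AS TABLE (id int, qty int);
GO

CREATE PROCEDURE dbo.usp_show_orders
    @orders dbo.OrderList READONLY      -- table-valued parameters must be READONLY
AS
BEGIN
    SELECT id, qty FROM @orders;
END
GO

DECLARE @o dbo.OrderList;
INSERT INTO @o VALUES (1, 10), (2, 20);
EXEC dbo.usp_show_orders @orders = @o;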

Books online for SQL Server 2014

Books Online for SQL 2014 is now available.
Let's see how to install it.

First go to Help -> Manage Help Settings.






SQL Variant

sql_variant is a data type that stores values of various SQL Server-supported data types.
When we are not sure about a datatype, we can define the column as sql_variant for testing purposes.

It supports int, char, and binary values, but does not support varchar(max)/nvarchar(max) or other large-object types.
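A minimal sketch (the table name is illustrative): a sql_variant column holding values of different base types, with SQL_VARIANT_PROPERTY exposing the underlying type.

CREATE TABLE dbo.variant_test (val sql_variant);

INSERT INTO dbo.variant_test VALUES (1);                          -- int
INSERT INTO dbo.variant_test VALUES ('abc');                      -- character
INSERT INTO dbo.variant_test VALUES (CONVERT(binary(4), 0x0A));   -- binary

SELECT val,
       SQL_VARIANT_PROPERTY(val, 'BaseType') AS base_type
FROM   dbo.variant_test;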


sp_sequence_get_range

This procedure reserves a range of sequence values for the calling application and returns metadata about the range.
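A minimal sketch (the sequence name is illustrative and is assumed to exist already): reserve a block of 10 values in a single call.

DECLARE @first sql_variant, @last sql_variant;

EXEC sys.sp_sequence_get_range
     @sequence_name     = N'dbo.test_seq',
     @range_size        = 10,
     @range_first_value = @first OUTPUT,
     @range_last_value  = @last OUTPUT;

SELECT @first AS range_first_value, @last AS range_last_value;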

Functions to get identity value

There are 3 functions for getting the current identity value.

@@IDENTITY

Returns the last identity value generated by the last statement in the current session, across all scopes.
If the statement fires a trigger and that trigger generates an identity value, @@IDENTITY returns the value generated by the trigger.
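A minimal sketch (the table name is illustrative) comparing @@IDENTITY with the two related functions, SCOPE_IDENTITY() and IDENT_CURRENT():

CREATE TABLE dbo.ident_test (id int IDENTITY(1, 1), data varchar(10));

INSERT INTO dbo.ident_test (data) VALUES ('a');

SELECT @@IDENTITY                      AS last_identity_any_scope,   -- includes values generated by triggers
       SCOPE_IDENTITY()                AS last_identity_this_scope,  -- current session and scope only
       IDENT_CURRENT('dbo.ident_test') AS last_identity_any_session; -- this table, any session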

How to use sequence

We saw what a sequence is in the previous article.
Now we will use it practically to understand how it works.

First we will create a sequence:
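A minimal sketch (the sequence name is illustrative): create a sequence and fetch values from it with NEXT VALUE FOR.

CREATE SEQUENCE dbo.test_seq
    AS int
    START WITH 1
    INCREMENT BY 1;

SELECT NEXT VALUE FOR dbo.test_seq AS seq_value;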


Local Variables

I would like to point out an interesting property of how values are assigned to local variables.


SELECT *
FROM   sales.store
WHERE  SalesPersonID = 99999999
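A minimal sketch of the behavior hinted at here (assuming the AdventureWorks Sales.Store table): when the SELECT assignment matches no rows, the local variable keeps its previous value instead of being set to NULL.

DECLARE @name nvarchar(50) = 'initial value';

SELECT @name = Name
FROM   Sales.Store
WHERE  SalesPersonID = 99999999;   -- matches no rows

SELECT @name AS result;            -- still 'initial value'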


DBCC CHECKIDENT




DBCC CHECKIDENT

Gives information about the current identity value; we can also use it to reset the identity value to a new value.
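A minimal sketch (the table name is illustrative):

DBCC CHECKIDENT ('dbo.ident_test', NORESEED);      -- only report the current identity value
DBCC CHECKIDENT ('dbo.ident_test', RESEED, 100);   -- reset the current identity value to 100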


IDENTITY Function

The IDENTITY function is used with the INTO clause to add an identity column to the new table.
It can only be used with SELECT ... INTO and cannot be used with any other statement.
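A minimal sketch (the target table name is illustrative): IDENTITY(data type, seed, increment) appears in the select list of a SELECT ... INTO statement.

SELECT IDENTITY(int, 1, 1) AS id,
       name
INTO   dbo.objects_copy
FROM   sys.objects;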

SET IDENTITY_INSERT ON




We created a sample table in the previous article on the identity property.
Now we will look at how to insert identity values explicitly into that table (see the sketch after this list).

* All column names must be specified in the INSERT statement
* Set IDENTITY_INSERT OFF after completing the insert
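A minimal sketch (the table name is illustrative):

SET IDENTITY_INSERT dbo.ident_test ON;

INSERT INTO dbo.ident_test (id, data)   -- an explicit column list is required
VALUES (1000, 'manual id');

SET IDENTITY_INSERT dbo.ident_test OFF;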

Identity property

We can define a column with the IDENTITY property in a table.
It is an auto-incrementing number and has two arguments: the seed and the increment.
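A minimal sketch (the table name is illustrative): IDENTITY(seed, increment).

CREATE TABLE dbo.ident_demo
(
    id   int IDENTITY(1, 1),   -- starts at 1, increments by 1
    data varchar(100)
);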

How to manage identity column in In-Memory tables


In-Memory tables have certain limitations, which I also mentioned in my previous articles.
One of them is that the identity property is not supported.
So when a table uses an identity column, we have to change the code and use a SEQUENCE instead.

Let's see an example:
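A minimal sketch (object names are illustrative), assuming the database already has a MEMORY_OPTIMIZED_DATA filegroup: replace the identity column with a sequence and supply values at insert time with NEXT VALUE FOR.

CREATE SEQUENCE dbo.mem_seq AS int START WITH 1 INCREMENT BY 1;
GO

CREATE TABLE dbo.mem_test
(
    id   int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1024),
    data varchar(100)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

INSERT INTO dbo.mem_test (id, data)
VALUES (NEXT VALUE FOR dbo.mem_seq, 'row 1');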



Transact SQL options not supported on In-Memory Tables

In-Memory tables are a great enhancement, but there are also certain limitations to using them.
Below are the Transact-SQL constructs that are not supported in In-Memory tables compared to disk-based tables.

Insert Statement Comparison - Interesting Test

Performance of inserts in the database

I have just written an article about how we can benefit from Delayed Durability to handle heavy log activity and the related locking issues that lead to I/O contention.
You can take a look at the article here.
In it we used an insert example and showed how Delayed Durability helps improve performance.

Delayed Durability -SQL Server 2014


One more quiet but very useful feature of SQL 2014 is delayed durability.
In-Memory tables also have a durability option, but here we are talking about normal tables and will see how to use it.

First we have to set the database option DELAYED_DURABILITY.
It has 3 values: DISABLED, FORCED, and ALLOWED.


ALTER DATABASE TEST
SET DELAYED_DURABILITY  =DISABLED/FORCED/ALLOWED



By default this option is DISABLED.

DISABLED: delayed durability is disabled and all transactions are fully durable.
ALLOWED: each transaction's durability is determined at the transaction level by committing with the option DELAYED_DURABILITY = ON or OFF (see the sketch after this list).
FORCED: every transaction is delayed durable.
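A minimal sketch, using the test table T created below: with the database set to ALLOWED, an individual transaction opts in at commit time.

ALTER DATABASE TEST SET DELAYED_DURABILITY = ALLOWED

BEGIN TRAN
INSERT INTO T VALUES (1, 'delayed durable row')
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON)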

To enable it from SSMS:






Let's look at an example to understand it better.


CREATE TABLE T
(ID INT,
DATA VARCHAR(100))


 

Now we will insert data into the table.
First we will insert with DELAYED_DURABILITY set to DISABLED.





DECLARE @DATE1 DATETIME
DECLARE @DATE2 DATETIME

ALTER DATABASE TEST
SET DELAYED_DURABILITY  =DISABLED

TRUNCATE TABLE T


   SELECT * INTO #before_disabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE1 =  GETDATE()

SET NOCOUNT ON

DECLARE @I INT =1
WHILE @I <= 100000
       BEGIN
              INSERT INTO T VALUES (@I, 'DATA'+ CONVERT(VARCHAR(10),@I))
              SET @I = @I+1
       END

SET NOCOUNT OFF

SELECT * INTO #after_disabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE2 =  GETDATE()
SELECT DATEDIFF (SS, @DATE1,@DATE2)


SELECT 
                B.DATABASE_ID, B.[FILE_ID],
                           CASE WHEN A.FILE_ID = 2 THEN 'LOG' ELSE 'DATA' END AS FILE_TYPE,

                A.NUM_OF_READS - B.NUM_OF_READS AS NUM_OF_READS,
                A.NUM_OF_BYTES_READ - B.NUM_OF_BYTES_READ AS NUM_OF_BYTES_READ,
                A.NUM_OF_WRITES - B.NUM_OF_WRITES AS NUM_OF_WRITES,
                (A.NUM_OF_BYTES_WRITTEN - B.NUM_OF_BYTES_WRITTEN)/(1024*1024) AS NUM_OF_BYTES_WRITTEN_MB
            FROM
               #BEFORE_DISABLED AS B
               INNER JOIN #AFTER_DISABLED A ON B.DATABASE_ID = A.DATABASE_ID AND B.[FILE_ID] = A.[FILE_ID]

GO

DECLARE @DATE1 DATETIME
DECLARE @DATE2 DATETIME


ALTER DATABASE TEST
SET DELAYED_DURABILITY  =FORCED

TRUNCATE TABLE T

SELECT * INTO #before_enabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE1 =  GETDATE()

SET NOCOUNT ON

DECLARE @I INT =1
WHILE @I <= 100000
       BEGIN
              INSERT INTO T VALUES (@I, 'DATA'+ CONVERT(VARCHAR(10),@I))
              SET @I = @I+1
       END

SET NOCOUNT OFF

SELECT * INTO #after_enabled FROM sys.dm_io_virtual_file_stats(DB_ID('Test'), NULL);

SELECT @DATE2 =  GETDATE()

SELECT DATEDIFF (SS, @DATE1,@DATE2)


SELECT 
                B.DATABASE_ID, B.[FILE_ID],
                           CASE WHEN A.FILE_ID = 2 THEN 'LOG' ELSE 'DATA' END AS FILE_TYPE,

                A.NUM_OF_READS - B.NUM_OF_READS AS NUM_OF_READS,
                A.NUM_OF_BYTES_READ - B.NUM_OF_BYTES_READ AS NUM_OF_BYTES_READ,
                A.NUM_OF_WRITES - B.NUM_OF_WRITES AS NUM_OF_WRITES,
                (A.NUM_OF_BYTES_WRITTEN - B.NUM_OF_BYTES_WRITTEN)/(1024*1024) AS NUM_OF_BYTES_WRITTEN_MB
            FROM
               #BEFORE_ENABLED AS B
               INNER JOIN #AFTER_ENABLED A ON B.DATABASE_ID = A.DATABASE_ID AND B.[FILE_ID] = A.[FILE_ID]








Here we can see that with DELAYED_DURABILITY = FORCED the insert loop completes within 4 seconds, compared to 22 seconds with DELAYED_DURABILITY = DISABLED.

The first run took 100,008 writes and wrote 59 MB of data.

After enabling delayed durability it took only 565 writes and wrote 32 MB of data.

As we can see, the performance gain is impressive, and it can be an option for solving many I/O-related issues where the log file and locking are the main points of contention.

Let's also look at how much I/O was required for each run.




This is because there is less contention for log I/O. Committing log records from the buffer in large chunks reduces contention much more than writing an individual log entry for each operation.
The risk is that if the system crashes for any reason before the buffered log records are written to disk, we will lose that data.

Important points to remember:

1. Delayed durable transaction commits are asynchronous.
2. Transactions become durable only when their log entries are flushed to disk.
3. The transaction log is written in batches instead of an individual write for each operation.
4. This reduces waits in the system.
5. It reduces I/O requirements.
6. The log is written to disk when the log buffer fills, when a fully durable transaction commits in the same database, or when the system procedure sp_flush_log is run (a sketch follows this list).
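A minimal sketch of the last point: force any outstanding delayed durable log records to disk.

EXEC sys.sp_flush_log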


