Overview of the SQL 2016 ‘Always Encrypted’ security feature.

In my last couple of blogs, we discussed various security options in SQL 2016. In this blog, we are going to discuss and demonstrate another important security feature of SQL 2016, called ‘Always Encrypted’.

The legacy features provide encryption either at the network layer (SSL), for data files (MDF), or for backup files (BAK), and encryption applies to all data or none. ‘Always Encrypted’, by contrast, encrypts and decrypts column-specific data at the client application layer via ADO.NET. You may have a table where ‘FirstName’ and ‘LastName’ are in plain text but the ‘passport no’ is encrypted. The passport number is always cipher text while in transit over the network and is saved in encrypted form in the data files and backup files. This feature is also available for Azure SQL, so even Microsoft sees only encrypted data.

Let’s use the following steps to demonstrate how Always Encrypted can be configured and used in test/production environments of SQL 2016.

---Create the AlwaysEncryptedDemo database. 
USE [master]
CREATE DATABASE AlwaysEncryptedDemo
GO
---Create table with 6 columns.   
USE [AlwaysEncryptedDemo]
CREATE TABLE EmployeeData
(
       EMPID INT Primary key,
       FirstName VARCHAR(100), 
       LastName VARCHAR(100),
       MobileNo BIGINT, 
       SALARY BIGINT,
       PassportNo VARCHAR(100)
)
GO

-- Insert 5 records. 
USE [AlwaysEncryptedDemo]
INSERT INTO EmployeeData VALUES
       (1, 'Pradeep', 'Papnai', 9595046647, 50000, 'ATNPP12345'), 
       (2, 'Vikram', 'Jadhav', 8088145698, 52000, 'YAG8383P'),
       (3, 'Darshan', 'Bandodkar', 9198234567, 51000, 'DD123453'),
       (4, 'Ajay', 'Jain', 875691232, 55000, 'AJ123JK12'),
       (5, 'Yogesh', 'Ghute', 8787453212, 49000, 'PT9876KA12')

Normal retrieval of rows.

-- Retrieve all records; our next action will encrypt the last three columns, which are confidential. 
USE [AlwaysEncryptedDemo]
SELECT EMPID, FirstName, LastName,MobileNo, SALARY, PassportNo
FROM EmployeeData

(Screenshot: FirstSelect – unencrypted query results)

Configure Always Encrypted on columns of a table

Please note these steps can also be performed using T-SQL queries or PowerShell, but it’s recommended to use the wizard for better understanding.

  1. Expand the ‘AlwaysEncryptedDemo’ database, right-click ‘EmployeeData’, and select ‘Encrypt Columns’.
  2. Click NEXT on the introduction page.
  3. Check the columns ‘MobileNo’, ‘Salary’, and ‘PassportNo’, select the encryption type, and click NEXT.

Two types of encryption are supported:-

Deterministic – Always encrypts a given plain-text value to the same cipher text, which means equality lookups are possible and the data can be indexed.

Randomized – Considered more secure because the cipher text is always different; however, the data in this column cannot be indexed.
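After the wizard completes, the encrypted columns carry encryption metadata in their definition. Here is a sketch of what the wizard generates for a deterministic column (the key name CEK_Auto1 is the wizard’s default and may differ in your environment; deterministic encryption also requires a BIN2 collation):

```sql
-- Generated column definition for a deterministically encrypted column.
PassportNo VARCHAR(100) COLLATE Latin1_General_BIN2
    ENCRYPTED WITH (
        COLUMN_ENCRYPTION_KEY = CEK_Auto1,
        ENCRYPTION_TYPE = DETERMINISTIC,
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
    )
```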

  4. Key Store provider = Windows Certificate Store, Master key source = Current user; click NEXT.
  5. Select ‘Proceed to finish now’ (you can instead generate a PowerShell script to run later); click NEXT.
  6. Verify the choices you made and click NEXT.
  7. Encryption might take time depending on the size of the columns in the table. On production systems, it is usually performed during off-peak hours.

Here is the slide show for the above-mentioned steps:-

(Slideshow: screenshots of the wizard steps)

-- Now if you run the select query again, the results for columns MobileNo, Salary, and PassportNo will be encrypted.
USE [AlwaysEncryptedDemo]
SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo
FROM EmployeeData

(Screenshot: EncryptSelect – encrypted query results)

If you wish to see decrypted data in SQL Server Management Studio, the connection string should include ‘column encryption setting=enabled’.

Right-click in the query window, select Connection > Change Connection, then open the ‘Additional Connection Parameters’ tab.

(Screenshot: SSMS_Connection_string)

-- Now the columns can be seen without encryption. 
USE [AlwaysEncryptedDemo]
SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo
FROM EmployeeData

(Screenshot: FirstSelect – decrypted query results)

Well, this is not the end of the story. If you connect to this SQL Server using SSMS from another machine, you will notice the columns are still encrypted even though ‘column encryption setting=enabled’ is in the connection string; that is because the machine does not have the certificate used for encryption.

The Always Encrypted feature maintains two keys:-

Column Master Key – an encryption key that protects the column encryption key(s). You must have at least one master key before encrypting any columns.

Column Encryption Key – the encryption key that actually protects the encrypted columns.
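Both keys are recorded as metadata in the database and can be inspected through the catalog views (a quick sketch; the key names shown will be whatever the wizard generated):

```sql
USE [AlwaysEncryptedDemo]
-- Column master key metadata (points to the certificate in the Windows store).
SELECT name, key_store_provider_name, key_path FROM sys.column_master_keys;
-- Column encryption key(s), stored encrypted by the master key.
SELECT name, column_encryption_key_id FROM sys.column_encryption_keys;
GO
```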

(Screenshot: Keys)

You can either use existing keys or create new ones. To copy the certificate, use the following steps.

Click ‘RUN’, type ‘MMC’, and press ENTER. On the ‘File’ menu, click ‘Add/Remove Snap-in’.
Select ‘Certificates’, select ‘My user account’, and click ‘Finish’ and ‘OK’.
Expand ‘Personal’, then ‘Certificates’, and select ‘Always Encrypted auto certificate’.

(Screenshot: Certificate 1)

You can export this certificate to any other machine from which a client or application connects to the SQL Server for encryption purposes. Here is an example of how the connection string can be used with PowerShell.

Use case without encryption connection string

#Change SQL server name
$connectionString = "Data Source=SQL1;Integrated Security=true;Initial Catalog=AlwaysEncryptedDemo;"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo FROM EmployeeData"
$command = $connection.CreateCommand()
$command.CommandText = $query
$result = $command.ExecuteReader()
$table = new-object "System.Data.DataTable"
$table.load($result)
$table | format-table 
$connection.Close()

(Screenshot: Powershell_Without_S)

Use case with encryption connection string

$connectionString = "Data Source=SQL1;Integrated Security=true;Initial Catalog=AlwaysEncryptedDemo;column encryption setting=enabled"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo FROM EmployeeData"
$command = $connection.CreateCommand()
$command.CommandText = $query
$result = $command.ExecuteReader()
$table = new-object "System.Data.DataTable"
$table.load($result)
$table | format-table 
$connection.Close()

(Screenshot: With_connection_string)
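One caveat worth noting: reading decrypted data only needs the connection-string setting, but inserting into encrypted columns additionally requires parameterized queries whose parameter types match the column definitions; the ADO.NET driver then encrypts the values transparently before they leave the client. A sketch (the sample values below are hypothetical):

```powershell
#Sketch: insert into a table with encrypted columns using typed parameters.
$connectionString = "Data Source=SQL1;Integrated Security=true;Initial Catalog=AlwaysEncryptedDemo;column encryption setting=enabled"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$command = $connection.CreateCommand()
$command.CommandText = "INSERT INTO EmployeeData VALUES (@id, @fname, @lname, @mobile, @salary, @passport)"
$command.Parameters.Add("@id", [System.Data.SqlDbType]::Int).Value = 6
$command.Parameters.Add("@fname", [System.Data.SqlDbType]::VarChar, 100).Value = "Sample"
$command.Parameters.Add("@lname", [System.Data.SqlDbType]::VarChar, 100).Value = "User"
$command.Parameters.Add("@mobile", [System.Data.SqlDbType]::BigInt).Value = 9000000000
$command.Parameters.Add("@salary", [System.Data.SqlDbType]::BigInt).Value = 45000
$command.Parameters.Add("@passport", [System.Data.SqlDbType]::VarChar, 100).Value = "ZZ000000"
[void]$command.ExecuteNonQuery()
$connection.Close()
```

Literal values embedded in the SQL text would fail against encrypted columns, which is why every value above goes through a parameter.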

More Details


#sql, #sql-security, #sql2016

Row Level Security (RLS)

SQL 2016 comes with a new developer feature called Row-Level Security (RLS), which uses predicate-based security to control row access based on attributes of the user login, as in the example below:

A database contains details of books, authors, and their associated cost. When an author supplies his credentials, he should be able to access only the books he wrote, along with their total cost, as in the following query and output.

(Screenshot: RLS – sample query and output)

RLS works on the following two key concepts for implementing security:-

Predicate function – An inline table-valued function that implements access control logic (for example, returning a row depending on the principal_name, role, or other attributes of the calling user).

Security predicate – Glues a predicate function to a table (for example, applying a function that checks for rep name or the rep manager role to the Accounts table).

Let’s use following code and screenshot for better understanding:-

--Create sample database. 
USE [master]
CREATE DATABASE RLSDEMO
GO

--Create 'EmployeeData' table. 
USE [RLSDEMO]
CREATE TABLE EmployeeData 
(
       EMPID INT PRIMARY KEY, 
       Firstname VARCHAR(50),
       LastName VARCHAR(50)
)
GO

----Create another table 'books'. 
USE [RLSDEMO]
CREATE TABLE Books
(
       BOOKID INT PRIMARY KEY,
       BookName VARCHAR(100),
       Cost INT,
       AuthorID INT FOREIGN KEY REFERENCES EmployeeData (EMPID)
)      
GO

----Insert records in employee table. 
USE [RLSDEMO]
INSERT INTO EmployeeData VALUES
(1, 'Vikram', 'Jadhav'), 
(2, 'Pradeep', 'Papnai'),
(3, 'Darshan', 'Bandodkar'), 
(4, 'Ajay', 'Jain')
GO 

----Insert records in books table. 
USE [RLSDEMO]
INSERT INTO Books VALUES
(1, 'Troubleshooting Vault Cache', 5000, 1),
(2, 'Troubleshooting Exchange', 12000, 1),
(3, 'PST migration black book', 5000, 1),
(4, 'EV & MAC client', 5000, 2),
(5, 'EV DB on SQL 2K16', 11000, 2), 
(6, 'Storage overview', 4000, 2),
(7, 'FSA crash overview', 6000, 3),
(8, 'FSAUtility basis', 3000, 3),
(9, 'SSMW-Server migration', 1500, 3),
(10, 'Dtrace fundamentals', 2000, 1), 
(11, 'Exchange mail flow', 4000, 1),
(12,'Networking tools NetMON-wireshark',3400, 3)
GO

----Create view for select query simplicity. 
CREATE VIEW BooksCost
AS SELECT 
       EmployeeData.Firstname,
       EmployeeData.LastName,
       Books.BOOKID,
       Books.BookName, 
       Books.Cost
FROM EmployeeData
INNER JOIN Books ON Books.AuthorID=EmployeeData.EMPID
GO

--Create a ‘Manager’ and three other users who are also employees and authors of various books. We will assume the first name is the login ID. 
CREATE USER Manager WITHOUT LOGIN
CREATE USER vikram WITHOUT LOGIN
CREATE USER pradeep WITHOUT LOGIN
CREATE USER darshan WITHOUT LOGIN
GO

---- Grant full access to the manager user, and read-only access to the remaining users(authors). 
GRANT SELECT, INSERT, UPDATE, DELETE ON BooksCost TO Manager
GRANT SELECT ON BooksCost TO vikram
GRANT SELECT ON BooksCost TO pradeep
GRANT SELECT ON BooksCost TO darshan
GO

--- Create a new schema for the predicate function; it is NOT mandatory but is recommended. 
CREATE SCHEMA SEC
GO

--- Create the predicate function
CREATE FUNCTION sec.fn_Author_Predicate(@Username AS varchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
-- Return 1 if the connection username matches the @Username parameter
-- or if the connection username is Manager
SELECT 1 AS Author_predicate_result
WHERE @Username = USER_NAME() OR USER_NAME() = 'Manager'
GO

-- Create and enable a security policy adding the function as a filter predicate
USE [RLSDEMO]
CREATE SECURITY POLICY sec.EmployeePolicyFilter
       ADD FILTER PREDICATE sec.fn_Author_Predicate(Firstname) 
       ON dbo.EmployeeData
       WITH (STATE = ON)
GO
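Once created, the policy and its predicate can be verified through the catalog views (a quick sketch; the object names below match those created above):

```sql
-- List security policies and the predicates bound to them.
SELECT sp.name AS policy_name, sp.is_enabled,
       spr.predicate_definition,
       OBJECT_NAME(spr.target_object_id) AS target_table
FROM sys.security_policies AS sp
JOIN sys.security_predicates AS spr
    ON sp.object_id = spr.object_id;
GO
```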

Test 1:-

--- As vikram, we get just the rows he owns, with the total books and cost
EXECUTE AS USER = 'vikram'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOK COUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

(Screenshot: vikram)

Test 2:-

--- As pradeep, we get just the rows he owns, with the total books and cost
EXECUTE AS USER = 'pradeep'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOK COUNT', count(bookid), 'TOTAL COST (INR)', sum(COST) FROM BooksCost
REVERT
GO

(Screenshot: pradeep)

Test 3:-

--- As darshan, we get just the rows he owns, with the total books and cost
EXECUTE AS USER = 'darshan'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOKCOUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

(Screenshot: darshan)

Test 4:-

--- As Manager, we get all rows
EXECUTE AS USER = 'Manager'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOKCOUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

(Screenshot: Manager)

Disable / Enable RLS

--- We can toggle the security policy on and off
ALTER SECURITY POLICY sec.EmployeePolicyFilter WITH (STATE = OFF)

---Enable again
ALTER SECURITY POLICY sec.EmployeePolicyFilter WITH (STATE = ON)

More details from MSDN

#rls, #sql, #sql2016

Dynamic Data Masking (DDM)

In your SQL table ‘Employee’ you may have sensitive information such as credit card numbers, passport numbers, mobile phones, etc., and you don’t want to expose this sensitive information to non-privileged users, but at the same time you want the remaining columns, such as EmployeeID, location, and employee name, to be accessible to every user having database/table-level permission.

A new feature, “Dynamic Data Masking” (DDM), introduced with SQL 2016, can meet this requirement without a complex set of front-end or DB-level coding. DDM limits sensitive data exposure by masking it for non-privileged users.

Four types of masking can be configured during table creation.

Default – Masks the data according to the field data type: if the data type is binary, varbinary, or image, a single byte of binary value 0 is used to mask the field; for date and time data types, the value 01.01.1900 00:00:00.0000000 is used; for numeric data types, a zero value is used.

Partial – Defines a custom mask for the field by exposing the first and last characters (defined by prefix and suffix lengths) and showing a custom padding string in the middle, in the form prefix[padding]suffix.

Email – Masks fields that store email addresses.

Random – Masks any numeric data type by replacing the original value with a random value within the range specified in the function.

A table definition will look like below.

/*First create a dedicate database*/
USE [master]
CREATE DATABASE DDMDEMO
GO

/*Create table with Dynamic Data Masking enabled*/
USE [DDMDEMO]
CREATE TABLE EmployeeData
(
EMPID INT   MASKED WITH (FUNCTION ='random(1, 12)') IDENTITY(1,1), ---Random mask: the int is replaced by a random value between 1 and 12.
FIRSTNAME VARCHAR(100), --Leave this column without masking. 
LASTNAME VARCHAR (100) MASKED WITH (FUNCTION='partial(2, "...", 2)') NULL, ---Partial mask: the first two and last two characters are shown; the middle characters are masked.  
PHONENo VARCHAR(50) MASKED WITH (FUNCTION='default()') NULL, ---Example of the default mask.
EMAIL VARCHAR(100) MASKED WITH (FUNCTION = 'email()') NULL ---Example of the email masking function.
)
GO

Populate the table with a few rows.

USE [DDMDEMO]
INSERT INTO EmployeeData 
 VALUES
('Pradeep', 'Papnai', '919595852246', 'papnai.pradeep@gmail.com'),
('Darhan', 'Bandodkar','91959568524','darshan.bandokar@hotmail.com'),
('Vikram', 'Jadhav', '915682147562', 'Vikram.jadhav@hotmail.com'),
('Ajay', 'Jain', '9545684575','ajay.jain@yahoo.com'),
('Yogesh','Ghute', '959586575826', 'yogesh.ghute@city.com')
GO

By default, dbo has the UNMASK permission and can retrieve all columns.

USE [DDMDEMO]
SELECT EMPID, FIRSTNAME, LASTNAME, PHONENo, EMAIL FROM EmployeeData 
GO

(Screenshot: dboretrive)

Now create a ‘TestUser’ without login and give ‘Select’ permission on the table.

CREATE USER TestUser WITHOUT LOGIN
GRANT SELECT ON EmployeeData TO TestUser
GO

Retrieve rows using the ‘TestUser’ credentials.

EXECUTE AS USER = 'TestUser'
USE [DDMDEMO]
SELECT EMPID, FIRSTNAME, LASTNAME, PHONENo, EMAIL FROM EmployeeData 
REVERT
GO

The output is masked, exactly as you would expect from the masking defined for the ‘LastName’, ‘PhoneNo’, and ‘Email’ columns.

(Screenshot: userretrive)

By default, dbo has the UNMASK privilege, and this permission can be granted to any other legitimate user. Let’s do the same using a query.

GRANT UNMASK TO TestUser
EXECUTE AS USER = 'TestUser'
USE [DDMDEMO]
SELECT EMPID, FIRSTNAME, LASTNAME, PHONENo, EMAIL FROM EmployeeData 
REVERT
GO

(Screenshot: dboretrive)

A few things to note:-

  • Creating a mask on a column does not prevent updates to that column. So although users receive masked data when querying the masked column, the same users can update the data if they have write permissions.
  • A masking rule cannot be defined on certain column types, such as encrypted columns (Always Encrypted) or FILESTREAM.
  • Use the sys.masked_columns view to query for table columns that have a masking function applied to them:
USE DDMDEMO
SELECT c.name, tbl.name AS table_name, c.is_masked, c.masking_function  
FROM sys.masked_columns AS c
JOIN sys.tables AS tbl   
    ON c.[object_id] = tbl.[object_id]  
WHERE is_masked = 1; 

(Screenshot: dmv)

More details about DDM

#ddm, #dymamic-data-masking, #sql, #sql-mask, #sql2016

SQL 2016 Live Query Statistics

In my last post we learned how ‘Query Store’ works; now it is time to show another amazing feature released with SQL 2016, called ‘Live Query Statistics’. To clarify, this feature is included with SQL Server Management Studio 2016, which means it also works when connected to SQL Server 2014.

Let’s now see how this feature can be beneficial for DBAs, consultants, or support engineers.

The steps for preparing the lab environment are the same as those mentioned in the ‘Query Store’ blog.

/* Please note, the database and tables are designed only for training purposes, so they don't necessarily have the best design. */
-- Create Database & two tables 'Region' and 'EmployeeData'
USE [master]
CREATE DATABASE QueryStoreDemo
GO

USE [QueryStoreDemo]
CREATE TABLE Region
       (
       ID INT IDENTITY(1,1),
       RNAME VARCHAR (100),
       HQ VARCHAR(100)
       )
GO

USE [QueryStoreDemo]
CREATE TABLE EmployeeCity
       (
       ID INT IDENTITY (1, 1),
       NAME VARCHAR (100),
       City VARCHAR(100), 
       RegionID INT
       )
GO

-- Insert values; the second insertion might take time (3-5 minutes) as we are inserting 5 rows 50,000 times. 
USE [QueryStoreDemo]
INSERT INTO Region  VALUES
       ('PUNE', 'SOUTH'),
       ('GOA', 'EAST'),
       ('NASHIK', 'WEST'),
       ('NASIK','NORTH'),
       ('SOLAPUR','Center')
GO

USE [QueryStoreDemo]
INSERT INTO EmployeeCity
VALUES
       ('Pradeep', 'Pune', 1),  
       ('Vikram', 'Mumbai', 2),
       ('Darshan', 'Goa', 3),
       ('AJAY', 'Nashik', 4),
       ('Yogesh', 'Solapur', 5)
GO 50000
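Under the hood, Live Query Statistics reads the same per-operator counters exposed by the sys.dm_exec_query_profiles DMV, so progress can also be watched from a second query window. A sketch (the session ID of 55 below is an assumption; replace it with the actual SPID of the first session):

```sql
-- Session 1: enable runtime statistics collection, then run a long query.
SET STATISTICS XML ON;
SELECT E.NAME, R.RNAME
FROM EmployeeCity AS E JOIN Region AS R ON E.RegionID = R.ID;

-- Session 2: watch operator-level progress of the running statement.
SELECT session_id, node_id, physical_operator_name,
       row_count, estimate_row_count
FROM sys.dm_exec_query_profiles
WHERE session_id = 55;  -- replace with the SPID of session 1
```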

Continue reading

#live-query-statistics, #query-plan, #sql, #sql2016

SQL 2016 Query Store

One of the amazing new features in the SQL 2016 release is the ‘Query Store’. It is going to be a game changer, and is most useful for SQL DBAs, support engineers, and consultants who used to rely on the ‘plan cache’ (AKA ‘procedure cache’) to track down query statistics and execution plans for queries run in the past. The plan cache is useful for knowing what changed after events such as a missing index, an application upgrade, etc.

The biggest disadvantage of the plan cache is that it resides in memory and is flushed once the SQL service restarts (or the server reboots). High memory pressure on the SQL instance can also cause plans to be flushed from memory.

Now with SQL 2016, if Query Store is enabled, execution plans are saved on disk and can be accessed even after the server is rebooted. The feature is enabled per database with a minimal configuration change, and the saved execution plans can be accessed using GUI reports or built-in DMV queries.
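That minimal configuration change is a single ALTER DATABASE statement (a sketch; the storage size shown is illustrative, not a recommendation):

```sql
-- Enable Query Store and cap its on-disk footprint.
ALTER DATABASE [QueryStoreDemo]
SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE, MAX_STORAGE_SIZE_MB = 100);
GO

-- Saved plans and runtime stats are exposed through catalog views.
SELECT qt.query_sql_text, p.plan_id, rs.avg_duration
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats AS rs ON p.plan_id = rs.plan_id;
GO
```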

Continue reading

#plancache, #query-performance, #querystore, #sql

SQL server database mirroring and practical demonstration.

SQL Server database mirroring is a disaster recovery and high availability technique that involves two SQL Server instances (plus an optional witness) on the same or different machines. Mirroring is implemented on a per-database basis and works only with databases that use the full recovery model.

The active copy of the database is known as the ‘principal’ database and the secondary copy as the ‘mirror’ database. The SQL Server instance that hosts the principal database is known as the ‘primary’ server, and the server that holds the mirrored copy as the ‘secondary’ server. In special cases, a third SQL Server instance can act as a witness.

Database mirroring involves redoing every insert, update, and delete operation that occurs on the principal database onto the mirror database as quickly as possible. Redoing is accomplished by sending a stream of active transaction log records to the mirror server, which applies log records to the mirror database, in sequence, as quickly as possible. Unlike replication, which works at the logical level, database mirroring works at the level of the physical log record.

Please note, Microsoft has deprecated this feature, and it may be removed in a version of SQL Server after 2016. Avoid using this feature in new development work, and plan to modify applications that currently use it. Use Always On Availability Groups instead.


The following steps can be used to set up SQL mirroring for a database residing on two different instances. In my test environment, I am using two SQL instances on different machines (SQL1 & SQL2). The SQL version is 2014 and the operating system is Windows 2012 R2.


Verify the SQL services are running under domain user credentials on both servers.

(Screenshot: 1mirroring)

On the primary SQL server, execute following script one by one.


----A. Create sample database for mirroring.
USE [master]
CREATE DATABASE MirroringDemo
GO

ALTER DATABASE [MirroringDemo] SET RECOVERY FULL
GO

----B. Create table schema.

USE [MirroringDemo]
CREATE TABLE EmployeeCity
(
ID INT IDENTITY(1,1),
NAME VARCHAR (100),
City VARCHAR(100)
)
GO

----C. Insert 100 records
USE [MirroringDemo]
INSERT INTO EmployeeCity
VALUES
('Pradeep', 'Pune')
GO 100

----D. Take database backup.

BACKUP DATABASE [MirroringDemo]
TO  DISK = N'D:\Backup\MirroringDemo.bak'
GO


On Secondary SQL server, execute following script one by one.

----A. Restore sample database.

USE [master]
RESTORE DATABASE [MirroringDemo]
FROM  DISK = N'D:\Backup\MirroringDemo.bak'
WITH   NORECOVERY
GO

-----B. Create the mirroring endpoint (if a mirroring endpoint already exists, drop it first with DROP ENDPOINT endpoint_mirroring).

CREATE ENDPOINT endpoint_mirroring
STATE = STARTED
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (ROLE=PARTNER);
GO

 ----C. Create a SQL login for the service account; if the login already exists, skip this step.

USE [master]
CREATE LOGIN [domain\sqlservice] FROM
WINDOWS WITH DEFAULT_DATABASE=[master]
GO

 ----D. Assign permission on endpoint to SQL service a/c.

USE [master]
GRANT CONNECT ON ENDPOINT::endpoint_mirroring to [domain\sqlservice]
GO

----E. This is an important step: set the mirroring partner now. The URL should point to the first server. This should always be performed on the secondary (mirror) database first.

ALTER DATABASE [MirroringDemo]
SET PARTNER = 'TCP://SQL1.Domain.LOCAL:5022'
GO


Go back to primary server, execute following script one by one.

-----A. Create the mirroring endpoint (if a mirroring endpoint already exists, drop it first with DROP ENDPOINT endpoint_mirroring).

CREATE ENDPOINT endpoint_mirroring
STATE = STARTED
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (ROLE=PARTNER);
GO

----B. Create a SQL login for the service account; if the login already exists, skip this step.
USE [master]
CREATE LOGIN [domain\sqlservice] FROM WINDOWS WITH DEFAULT_DATABASE=[master]
GO

 ----C. Assign permission on endpoint to SQL service a/c.

USE [master]
GRANT CONNECT ON ENDPOINT::endpoint_mirroring to [domain\sqlservice]
GO

----D. Set mirroring partner now. The URL should point to Second server.

ALTER DATABASE [MirroringDemo]
SET PARTNER = 'TCP://SQL2.Domain.LOCAL:5022'
GO


The database mirroring is ready now. The databases will appear as below.


(Screenshot: 2mirroring)


SQL Server database mirroring can be set up to provide high availability or disaster recovery. Depending on the needs, a DBA can choose among three available modes. By default, SQL mirroring works in high-safety mode.

High safety without automatic fail-over – Transaction logs are written and committed on the principal and mirror databases synchronously. This might produce delays and slower operation because each transaction log record must be committed on both databases before it is considered secure. This mode does not provide automatic fail-over; the DBA must manually promote the mirrored copy to principal when the primary server is not available.

High safety with automatic fail-over – Requires one additional server, called the witness server, and otherwise works the same as high-safety mode. If the principal database goes down, the witness server initiates the automatic fail-over process and the mirrored database becomes the principal.

High performance – Uses asynchronous communication: transaction logs are written and committed on the principal server first, and later sent to and committed on the mirror server. Automatic fail-over isn’t possible. When the primary server is down, the DBA must manually force the mirrored copy to become the principal database; this may cause data loss because of the asynchronous communication between the databases. To reduce data loss, take a tail-log backup if the failed server allows it, remove mirroring, and restore the tail of the log on the previously mirrored database.

----Change safety mode
USE [master]
ALTER DATABASE [MirroringDemo] SET SAFETY OFF
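For high safety with automatic fail-over, a witness is added from the principal once both partners are connected (a sketch; SQL3 is a hypothetical third instance with its own mirroring endpoint on port 5022):

```sql
----Run on the principal after the partner session is established.
ALTER DATABASE [MirroringDemo]
SET WITNESS = 'TCP://SQL3.Domain.LOCAL:5022'
GO
```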


Use the following syntax to pause, resume, remove, or fail over a mirrored database.

----Pause mirroring.
ALTER DATABASE [MirroringDemo] SET PARTNER SUSPEND
GO

----Resume mirroring:
ALTER DATABASE [MirroringDemo] SET PARTNER RESUME
GO

----Failover
ALTER DATABASE [MirroringDemo] SET PARTNER FAILOVER
GO

--Remove mirroring
ALTER DATABASE [MirroringDemo] SET PARTNER OFF
GO

My suggestion to every student is to learn how things can be configured using SSMS in order to understand the concepts; configuration using SQL queries then greatly helps for documentation, reapplying the same steps, and repetitive tasks. Here is the slide show of SQL mirroring using SSMS.

(Slideshow: SQL mirroring configuration using SSMS)


You can monitor the health of database mirroring using a feature called ‘Database Mirroring Monitor’.

(Screenshot: 13mirroring)

(Screenshot: 14mirroring)


SQL DMV to check mirroring details.

SELECT
               db_name(SM.[database_id])         AS [Database Name]
               ,SM.mirroring_state               AS [Mirror State]
               ,SM.mirroring_state_desc          AS [Mirror State Desc]
               ,SM.mirroring_partner_name        AS [Partner Name]
               ,SM.mirroring_partner_instance    AS [Partner Instance]
               ,SM.mirroring_role_desc           AS [Mirror Role]
               ,SM.mirroring_safety_level_desc   AS [Safety Level]
               ,SM.mirroring_witness_name        AS [Witness]
               ,SM.mirroring_connection_timeout  AS [Timeout(sec)]
               ,SM.mirroring_redo_queue          AS [REDO QUEUE]
FROM sys.database_mirroring AS SM
WHERE mirroring_guid IS NOT null
ORDER BY [Database Name];

(Screenshot: 15mirroring)


All content provided on this blog is for informational purposes only. The reason for this blog is to document the things that I have learned by reading blogs/books and by self-study about Windows Server. We do not take any responsibility for events that may happen because of using/applying the code/concepts mentioned in this blog.

If you like the above content then feel free to share it with your colleagues and friends. If you wish to improve the content, feel free to comment and contact me directly.

#database-mirroring, #disaster-recovery, #sql

SQL server Log Shipping overview and practical demonstration.

SQL Server log shipping allows administrators to automatically send transaction log backups from a primary SQL instance to one or more secondary SQL instances using agent jobs in the background. The following are the benefits of the log shipping feature.

  • Provides a disaster-recovery solution.
  • Supports read-only access on secondary databases.
  • Accidental changes can be avoided, as the delay of the log copy/restore can be configured based on requirements.

Let’s now see how log shipping can be configured in a test environment. The following steps are performed on SQL 2014 with Windows Server 2012 R2 as the operating system.

Open SQL Server Management Studio (SSMS) and execute the following query to create a sample database and a sample table, then populate it with 100 initial rows.

USE [master]
CREATE DATABASE [LogShippingDemo]
GO
ALTER DATABASE [LogShippingDemo] SET RECOVERY FULL 
GO
 
--Create a sample table that holds name & city. 
Use [LogShippingDemo]
Create Table EmployeeCity
( 
ID INT IDENTITY(1,1), 
NAME VARCHAR (100),  
City VARCHAR(100)    
)
GO
 
--Insert 100 records
Use [LogShippingDemo]
INSERT INTO EmployeeCity
Values ('Pradeep', 'Pune')
GO 100

USE LogShippingDemo
SELECT COUNT(*) FROM EmployeeCity 

The basic requirement for log shipping is to have the SQL Agent service running under a domain account that has permission on a shared location accessible from both SQL servers.

Open SQL Server Configuration Manager (SSCM) to confirm the agent service is running under a domain account.

(Screenshot: 2log)

Open SQL Server Management Studio, select the ‘LogShippingDemo’ database, then select the ‘Transaction Log Shipping’ option from the database properties.

(Screenshot: 1log)


Define a shared location that is accessible from all the SQL servers participating in log shipping. Additionally, define the properties of the agent job that will be responsible for backing up the transaction log on the source (primary) server.

Connect to the secondary SQL server and configure the secondary database settings. The administrator can either take a new full backup or use an existing one.

(Screenshot: 4log)


Define the local folder and the agent job name on the secondary SQL server responsible for copying the transaction log from the shared location.

(Screenshot: 5log)


Select restore job properties and define secondary database state.

(Screenshot: 6log)

Restoring – This mode is also known as NORECOVERY mode and cannot be used for read operations.

Standby / Read-Only – This mode is also known as STANDBY mode and can be used for read operations. Transactions which are in process on the primary or any uncommitted transactions cannot be read on the secondary database when the database is in a read-only state.

(Screenshot: 7log)


After completing the above configuration, don’t forget to generate the script of the operation for your future reference.

(Screenshot: 8log)

Successful completion of log shipping will show the following window.

(Screenshot: 9log)


Let’s now insert, 100 more records in primary database.

USE LogShippingDemo
INSERT INTO EmployeeCity
Values
     ('Pradeep', 'Pune')
GO 100
You may not see the updated rows immediately, as there can be a configured delay in the backup, copy, and restore of the transaction log. If you wish to see the change sooner, run the following jobs manually:

  • LSBackup_LogShippingDemo – Runs on the primary server; backs up the transaction log on the source server and copies it to the shared location.
  • LSCopy_SQL1_LogShippingDemo – Runs on the secondary server; copies the transaction log from the shared location to a local folder on the secondary server.
  • LSRestore_SQL1_LogShippingDemo – Runs on the secondary server; restores the transaction log to the secondary database.
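Besides the GUI, log shipping status can also be checked from T-SQL; the monitoring data lives in msdb. A sketch using the built-in status procedure and monitoring tables:

```sql
-- Summarized backup/copy/restore status for all log shipping databases.
EXEC master.dbo.sp_help_log_shipping_monitor;
GO

-- Last backup/restore details from the msdb monitoring tables.
SELECT primary_database, last_backup_file, last_backup_date
FROM msdb.dbo.log_shipping_monitor_primary;
SELECT secondary_database, last_restored_file, last_restored_date
FROM msdb.dbo.log_shipping_monitor_secondary;
GO
```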


#disaster-recovery, #log-shipping, #sql