How to safely decommission a spare Exchange 2016 server from the organization.

There are many occasions when an Exchange admin needs to decommission an unused Exchange server from the organization, perhaps one that was installed by mistake and doesn't fulfill a business requirement. Lots of blogs and links on the internet will guide you to do this via ADSI Edit, but the preferred way is to remove the server gracefully from the front end (Exchange PowerShell or the GUI). The following PowerShell script was verified in an Exchange 2016 test environment for this purpose.

$Exch2k16="exmbx2" #Exchange server name that needs to be removed.
$DestDb="Mailbox Database 1529611009" #Destination mailbox database where the mailboxes of the server will be moved.
Get-MoveRequest | Remove-MoveRequest -Confirm:$False #Delete existing move requests, if any.
Get-Mailbox -Server $Exch2k16 | New-MoveRequest -TargetDatabase $DestDb #Move mailboxes to the other database.
Get-Mailbox -AuditLog -Server $Exch2k16 | New-MoveRequest -TargetDatabase $DestDb #Move auditing mailboxes.
Get-Mailbox -Monitoring -Server $Exch2k16 | New-MoveRequest -TargetDatabase $DestDb #Move health/monitoring mailboxes.
Get-Mailbox -Archive -Server $Exch2k16 | New-MoveRequest -TargetDatabase $DestDb #Move all archives.
Get-Mailbox -PublicFolder -Server $Exch2k16 | New-MoveRequest -TargetDatabase $DestDb #Move public folder mailboxes.
Get-MoveRequest | Get-MoveRequestStatistics #Wait until every move request reports Completed before removing databases.
Get-MailboxDatabase -Server $Exch2k16 | Remove-MailboxDatabase -Confirm:$False #Remove the mailbox databases.
Get-ReceiveConnector -Server $Exch2k16 | Remove-ReceiveConnector -Confirm:$False #Remove the receive connectors.

Now uninstall the Exchange binaries using the following command.

Setup.exe /mode:Uninstall /IAcceptExchangeServerLicenseTerms

ExchangeUn.jpg

 


#exchange, #movemailbox

Overview of Memory-Optimized Tables.

We all know that retrieving data from physical memory (aka RAM) is faster than retrieving data saved on a physical disk (HDD). Memory-optimized tables save and retrieve data in physical memory rather than on hard disk. This can be an excellent feature for many use cases, such as online games where only the end result matters.

This feature was introduced with SQL Server 2014, and continuous improvements have been made in subsequent releases. In a lot of places you will see this feature named "Hekaton" or "In-Memory OLTP".

The following short demo may help develop your understanding of memory-optimized tables and how their performance compares with disk-based tables.

-- Create InMemoryTableDemo database. 
USE [master]
CREATE DATABASE [InMemoryTableDemo]
GO
-- Create Memory optimized File Group.
USE [master]
ALTER DATABASE [InMemoryTableDemo]
ADD FILEGROUP MemoryGroupFG CONTAINS MEMORY_OPTIMIZED_DATA
-- Add the container for file group. 
ALTER DATABASE [InMemoryTableDemo]
ADD FILE (NAME = AWMemFile, FILENAME = N'F:\FGData\FG1') TO FILEGROUP MemoryGroupFG
GO

In the following example, I have set DURABILITY to SCHEMA_ONLY, which means only the schema will be recoverable in case of a crash, as the rows remain only in memory. The other option is SCHEMA_AND_DATA, which protects the table schema as well as the rows; a copy of the data is also saved to disk.

-- Create Memory Based Table.
USE [InMemoryTableDemo]
CREATE TABLE TableInMemory
(
       ID INT CONSTRAINT [pk_TableInMemory_ID] PRIMARY KEY NONCLUSTERED IDENTITY (1,1), 
       EMPID INT,
       CharData VARCHAR(100)
) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_ONLY )
GO
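For comparison, here is a sketch of the SCHEMA_AND_DATA alternative (the table name is hypothetical). With this durability option the rows are also persisted to the memory-optimized filegroup, so data survives a restart at the cost of extra disk I/O.

```sql
-- Create a durable memory-optimized table (sketch).
USE [InMemoryTableDemo]
CREATE TABLE TableInMemoryDurable
(
       ID INT CONSTRAINT [pk_TableInMemoryDurable_ID] PRIMARY KEY NONCLUSTERED IDENTITY (1,1),
       EMPID INT,
       CharData VARCHAR(100)
) WITH ( MEMORY_OPTIMIZED = ON , DURABILITY = SCHEMA_AND_DATA )
GO
```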

Let's create a disk-based table with the same columns.

-- Create Disk Based Table.
USE [InMemoryTableDemo]
CREATE TABLE TableInDisk
(      
       ID INT CONSTRAINT [pk_TableInDisk_ID] PRIMARY KEY NONCLUSTERED IDENTITY (1,1), 
       EMPID INT, 
       CharData VARCHAR(100)
)
GO

Now we insert 100,000 rows. It takes only approximately 4 seconds for the memory-optimized table, whereas the disk-based table takes approximately 200 seconds.

-- Test 100,000 inserts into a Memory Based Table 
DECLARE @start DATETIME = GETDATE();
DECLARE @count INT = 1;
WHILE @count <= 100000
BEGIN
  INSERT INTO TableInMemory VALUES 
       (@count, 'stuff');
        SET @count = @count + 1;
END
SELECT DATEDIFF(s, @start, GETDATE()) AS [Memory Insert]
GO

-- Test 100,000 inserts into a Disk Based Table
DECLARE @start DATETIME = GETDATE();
DECLARE @count INT = 1;
WHILE @count <= 100000
BEGIN
  INSERT INTO TableInDisk
VALUES 
       (@count, 'stuff');
       SET @count = @count + 1;
END
SELECT DATEDIFF(s, @start, GETDATE()) AS [Disk Insert]
GO

first

Let's test the retrieval of rows. Run the following queries together with 'Include Actual Execution Plan' or 'Live Query Statistics' enabled.

-- Run the following queries together with 'Include Actual Execution Plan' OR 'Live Query Statistics' enabled
USE [InMemoryTableDemo]
SELECT * FROM TableInDisk
SELECT * FROM TableInMemory
GO

Comparsion.jpg

You will notice the memory-optimized table takes approximately 16% of the cost, compared to the disk-based table that takes 84%. Additionally, the storage & I/O cost columns will give the details of the type of query.

More details on memory-optimized tables.

#hekaton, #in-memory-optimization, #sql2016, #storage

How to join VMware ESX / vCenter to an Active Directory domain and manage it using a domain account.

Most organizations use AD infrastructure for authentication and to administer the organization's resources. ESX servers and vSphere can also be joined to an AD domain and administered using domain accounts.

Here are the steps to do the same.

Confirm the appropriate DNS address and domain name are configured for ESX. Log in to ESX using the vSphere client, select the Configuration tab, then select 'DNS and Routing'. If the correct DNS IP is not configured, click 'Properties' and save valid details.

1.jpg

Click on 'Authentication Services', select 'Properties' to edit the current settings, and select 'Active Directory' as the directory service type. Type the domain name, then join the domain by supplying domain admin credentials.

2.jpg

Once ESX has joined the domain, you should see its computer account listed in ADUC (Active Directory Users and Computers).

3.jpg

Now create a group called 'VMware Admin' and a user 'VAdmin'; this user will be a member of the 'VMware Admin' group.

4.jpg

In your vSphere client, select the Permissions tab, right-click on empty space, and select 'Add Permission'.

5.jpg

Click 'Add' and select the domain from the drop-down menu; you should be able to see the AD objects. Select the 'VMware Admin' group.

6.jpg

Select the role you wish to map to the AD group 'VMware Admin'.

7.jpg

Once it is added successfully, you should be able to log in to ESX via the vSphere client using domain credentials. You can check 'Use Windows session credentials' if you want to log in using the current Windows login.

8.jpg

If you are managing ESX using VMware vCenter, open the vCenter page, go to 'Administration', select 'Configuration', and under 'Identity Sources' select 'Active Directory (Integrated Windows Authentication)'. Type the appropriate domain name and click OK.

9.jpg

Click the 'Global Permissions' tab, click the '+' option, then add the AD group 'VMware Admin'.

11.jpg

Select the role you wish to map to the 'VMware Admin' AD group.

12.jpg

Now you should be able to log in to vCenter using Windows and AD authentication.

13.jpg

#active-directory, #vmware, #windows

How to create an iSCSI shared disk in Windows Server for an ESX vSphere environment.

This is my first blog for ESX/vSphere, where the requirement was to create an iSCSI datastore for HA & DRS (High Availability & Distributed Resource Scheduler). We created the storage from a Windows server instead of having a dedicated storage appliance/VM. The purpose of such a configuration is only for learning and testing HA/DRS activity.

This configuration was built on Windows Server 2012 R2 with ESX 6.0.

On the Windows server, install 'File and iSCSI Services' and make sure 'iSCSI Target Server' is selected.

1.jpg

Once the role is installed, select the 'File and Storage Services' feature and then 'iSCSI'.

Right-click on empty space and select 'New iSCSI Virtual Disk' as shown in the example above. In my screenshot one volume has already been created and I am creating a new one.

2.jpg

Now select the partition where you wish to place the Virtual Disk.

3.jpg

Give an appropriate name and description and confirm the path of the iSCSI target.

4.jpg

Give the appropriate size and select the type of disk as mentioned below. In my example I selected 'Dynamically expanding', as I don't need to worry about space.

5.jpg

You need to create a target, which will be the list of ESX servers participating in HA/DRS and accessing the VHD using the iSCSI protocol.

6.jpg

In my example, I have used the DNS names of the ESX servers as initiators, but they can also be identified by IQN, IP address, or MAC address.

16.jpg

In the following screenshot two ESX servers are added, but you can have even more.

8.jpg

Select the type of authentication; I am leaving it blank to avoid confusion.

9.jpg

On the next page you can confirm the settings you have selected, and the results of the different sections will be available for you.

10.jpg

Now the shared iSCSI disk is ready; you can add it to the ESX server using the vSphere console. Select ESX server \ Manage \ Storage \ Storage Adapters \ Targets \ Add Target.

11.jpg

Rescan the storage so all newly attached drives are visible.

12.jpg

Now you should be able to see the paths of all available iSCSI shared disks.

13.jpg

The iSCSI disk will also be available as a storage device.

14.jpg

On your Windows server, you will notice the target status shows 'Connected'.

15.jpg

#disaster-recovery, #esx, #scsi, #storage, #vmware

Overview of SQL 2016 ‘Always Encrypted’ Security feature.

In a couple of my last blogs, we discussed various security options in SQL 2016. In this blog, we are going to discuss and demonstrate another important SQL 2016 security feature, called 'Always Encrypted'.

The legacy features provide encryption either at the network layer (SSL), for data files (MDF), or for backup files (BAK), and encryption applies to all data or none. 'Always Encrypted', by contrast, uses the client application layer via ADO.NET to encrypt/decrypt column-specific data. You may have a table where 'FirstName' and 'LastName' are in plain text but 'PassportNo' is encrypted. 'PassportNo' will always be cipher text while in transit over the network and will be saved in encrypted form in data files and backup files. This feature is also available for SQL Azure, so even Microsoft will only see encrypted data.

Let's use the following steps to demonstrate how Always Encrypted can be configured and used in test/production environments of SQL 2016.

---Create the AlwaysEncryptedDemo database. 
USE [master]
CREATE DATABASE AlwaysEncryptedDemo
GO
---Create table with 6 columns.   
USE [AlwaysEncryptedDemo]
CREATE TABLE EmployeeData
(
       EMPID INT Primary key,
       FirstName VARCHAR(100), 
       LastName VARCHAR(100),
       MobileNo BIGINT, 
       SALARY BIGINT,
       PassportNo VARCHAR(100)
)
GO

-- Insert 5 records. 
USE [AlwaysEncryptedDemo]
INSERT INTO EmployeeData VALUES
       (1, 'Pradeep', 'Papnai', 9595046647, 50000, 'ATNPP12345'), 
       (2, 'Vikram', 'Jadhav', 8088145698, 52000, 'YAG8383P'),
       (3, 'Darshan', 'Bandodkar', 9198234567, 51000, 'DD123453'),
       (4, 'Ajay', 'Jain', 875691232, 55000, 'AJ123JK12'),
       (5, 'Yogesh', 'Ghute', 8787453212, 49000, 'PT9876KA12')

Normal retrieval of rows.

-- Retrieve all records, our next action will encrypt last three columns those are confidential. 
USE [AlwaysEncryptedDemo]
SELECT EMPID, FirstName, LastName,MobileNo, SALARY, PassportNo
FROM EmployeeData

FirstSelect

Configure Always Encrypted on columns of a table

Please note these steps can also be performed using SQL queries & PowerShell, but it's recommended to use the wizard for better understanding.

  1. Expand the 'AlwaysEncryptedDemo' database, right-click 'EmployeeData', and select 'Encrypt Columns'.
  2. Click NEXT on introduction page.
  3. Check the columns 'MobileNo', 'Salary', 'PassportNo'. Select the encryption type and click NEXT.

There are two types of encryption supported:-

Deterministic – always encrypts to the same cipher text, meaning a given plaintext value always produces the same encrypted value; such a column can be indexed and used in equality comparisons.

Randomized – considered more secure, as the cipher text will always be different; a column encrypted this way cannot be indexed.

  4. Key store provider = Windows Certificate Store, master key source = Current User, click NEXT.
  5. Select 'Proceed to finish now' (although you can also generate a PowerShell script to run later), click NEXT.
  6. Verify the choices you made, click NEXT.
  7. Encryption might take time depending on the size of the columns in the table. On production systems it is usually performed during off-peak hours.
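The choice made in the wizard ends up as column metadata. Here is a hedged sketch of roughly what the encrypted column definition looks like afterwards (the key name CEK_Auto1 is the wizard's usual auto-generated default and may differ in your environment; note that deterministic encryption also requires a BIN2 collation on character columns):

```sql
-- Approximate post-wizard definition of one encrypted column (sketch).
PassportNo VARCHAR(100) COLLATE Latin1_General_BIN2
    ENCRYPTED WITH (
        COLUMN_ENCRYPTION_KEY = [CEK_Auto1],  -- auto-generated key name, may differ
        ENCRYPTION_TYPE = DETERMINISTIC,      -- or RANDOMIZED
        ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
    )
```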


-- Now if you run the select query again, the results for columns MobileNo, Salary, PassportNo will be encrypted
USE [AlwaysEncryptedDemo]
SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo
FROM EmployeeData

EncryptSelect

If you wish to see decrypted data using SQL Server Management Studio, the connection string should include the following: 'column encryption setting=enabled'.

Right-click in the query window, select Connection > Change Connection, then 'Additional Connection Parameters'.

SSMS_Connection_string

-- Now the columns can be seen without encryption. 
USE [AlwaysEncryptedDemo]
SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo
FROM EmployeeData

FirstSelect

Well, this is not the end of the story. If you connect to this SQL server using SSMS from another machine, you will notice the columns are still encrypted even though 'column encryption setting=enabled' is in the connection string; that's because you don't have the certificate used for encryption.

The Always Encrypted feature maintains two keys:-

Column Master Key – this is an encryption key that protects the column encryption key(s). You must have at least one master key before encrypting any columns.

Column Encryption Key – this is the encryption key that actually protects the encrypted columns.

Keys
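Both keys can also be inspected with T-SQL; a small sketch, assuming the wizard has already run against AlwaysEncryptedDemo:

```sql
-- List the column master keys and column encryption keys the wizard created.
USE [AlwaysEncryptedDemo]
SELECT name, key_store_provider_name, key_path FROM sys.column_master_keys
SELECT name FROM sys.column_encryption_keys
GO
```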

You can either use existing keys or create new ones. In order to copy the certificate, use the following steps.

Click 'Run', type 'MMC' and press the ENTER key. On the 'File' menu, click 'Add/Remove Snap-in'.
Select 'Certificates', select 'My user account', and click 'Finish' and 'OK'.
Expand 'Personal', then 'Certificates', and select the 'Always Encrypted auto certificate'.

Certificate 1

You can export this certificate to any other machine where a client or application connects to the SQL server for encryption purposes. Here is an example of how the connection string can be used with PowerShell.

Use case without the encryption setting in the connection string

#Change SQL server name
$connectionString = "Data Source=SQL1;Integrated Security=true;Initial Catalog=AlwaysEncryptedDemo;"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo FROM EmployeeData"
$command = $connection.CreateCommand()
$command.CommandText = $query
$result = $command.ExecuteReader()
$table = new-object "System.Data.DataTable"
$table.load($result)
$table | format-table 
$connection.Close()

Powershell_Without_S.jpg

Use case with the encryption setting in the connection string

$connectionString = "Data Source=SQL1;Integrated Security=true;Initial Catalog=AlwaysEncryptedDemo;column encryption setting=enabled"
$connection = New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query = "SELECT EMPID, FirstName, LastName, MobileNo, SALARY, PassportNo FROM EmployeeData"
$command = $connection.CreateCommand()
$command.CommandText = $query
$result = $command.ExecuteReader()
$table = new-object "System.Data.DataTable"
$table.load($result)
$table | format-table 
$connection.Close()

With_connection_string.jpg

More Details

#sql, #sql-security, #sql2016

Overview of Column Store Indexes (CSI).

Since SQL 2012, Microsoft has been enhancing the capabilities of columnstore indexes, which can dramatically improve the performance of SQL queries. You may ask: what is a columnstore index, and how is it different from a traditional rowstore index? Here is an example to clear up doubts.

Here is the small table.

ID Name Salary
1 emp1 1000
2 emp2 1200
3 emp3 1100

Data will be saved in the following format when:

A.    Row Store index is used
Page 1 (1, emp1, 1000),
Page 2 (2, emp2, 1200),
Page 3 (3, emp3, 1100)

B.    Column Store index is used
Page 1 (1, 2, 3),
Page 2 (emp1, emp2, emp3),
Page 3 (1000, 1200, 1100)

Now, without discussing a lot, let's see a demonstration of the difference in performance.

-- Creating Sample demo database. 
USE [master]
CREATE DATABASE ColumnStoreDemo
GO

-- Creating Table to demo CS indexes
USE [ColumnStoreDemo]
CREATE TABLE CSTable
(
EMPID INT IDENTITY(1,1),
FirstName VARCHAR(100),
LastName VARCHAR (100),
City VARCHAR (100),
ProductGroup VARCHAR (100),
SALARY BIGINT,
JoingDate DATE DEFAULT GETDATE()
)
GO

-- Creating a row store clustered & non-clustered indexes. 
CREATE CLUSTERED INDEX CSTable_CI_EMPID ON CSTable (EMPID)
CREATE NONCLUSTERED INDEX CSTable_CI_FirstName ON CSTable (FirstName)
GO

-- Creating a non clustered column store index. 
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_CSTable_NC_ColumnStore] ON CSTable (EMPID, FirstName, LastName, City, ProductGroup, JoingDate)
GO
-- Query to show all indexes of a given table.
SELECT
ind.name, 
obj.name, 
ind.type_desc
FROM sys.indexes ind INNER JOIN sys.objects obj ON ind.object_id = obj.object_id
WHERE obj.name like 'CSTable'
GO

Indexes

-- Inserting 7 rows per batch, repeated 10,000 times (70,000 rows total). 
INSERT INTO CSTable (Firstname, LastName, City, ProductGroup, SALARY)
VALUES
('Pradeep', 'Papnai', 'Pune', 'EV', 1000),
('Vikram', 'Jadhav', 'Pune', 'EV', 2500), 
('Rahul', 'Sharma', 'Pune', 'Antivirus', 1500),
('Rahul', 'Gupta', 'Pune', 'Exchange', 1900),
('Ajay', 'Gadge', 'Mumbai', 'EV', 2100), 
('Ajay', 'Jain', 'Mumbai', 'EV', 1600), 
('Rahul', 'Katiyar', 'Delhi', 'EV', 1660)
GO 10000

Run the following two commands together with 'Include Actual Execution Plan' enabled.

USE [ColumnStoreDemo]
SELECT COUNT(*), CITY, PRODUCTGROUP FROM CSTable
GROUP BY CITY, PRODUCTGROUP

SELECT COUNT(*), CITY, PRODUCTGROUP FROM CSTable
GROUP BY CITY, PRODUCTGROUP
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) ---We are ignoring the column store index so the optimizer falls back to the row store indexes. 
GO

Example1

The Column Store Index (CSI) feature was introduced with SQL 2012, and significant improvements were made in subsequent versions of SQL Server (2014/2016). CSI uses xVelocity technology, which is based on the VertiPaq engine. Columnstore indexes store data column-wise; each page stores data from a single column. Traditional rowstore indexes (heap/clustered/non-clustered) save data for each row in data pages.

During compilation, the query optimizer automatically considers and prioritizes columnstore indexes unless we ignore them manually (with the hint OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)).

Column store increases performance by reducing I/O, as it:
–    reads less data, which is highly compressed.
–    processes data in units called "batches" from disk to memory.
–    performs segment elimination (each segment stores the minimum & maximum values for integer data types).

A Row Group is a group of up to 1 million (1,048,576) logically contiguous rows.
A Column Segment contains the values from one column for a set of rows.

Architecture
Segments are compressed, and a segment is the unit of transfer between disk and memory.
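Row groups and segments can be inspected through a catalog view; here is a sketch using the demo table above (sys.column_store_row_groups is available from SQL 2014 onwards):

```sql
-- Inspect the row groups of the columnstore index on CSTable.
USE [ColumnStoreDemo]
SELECT row_group_id, state_description, total_rows
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('CSTable')
GO
```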

Architecture2
SQL 2012 offering for ColumnStore
–    A Table (Heap or BTree) can have only one NonClustered ColumnStore Index.
–    A Table with a NonClustered ColumnStore Index becomes read-only and cannot be updated.
–    The NonClustered ColumnStore Index uses Segment Compression for high compression of Columnar data.

SQL 2014 offering for ColumnStore.
–    You can create one Clustered ColumnStore Index on a Table, and no further Indexes can be created.
–    A Table with Clustered ColumnStore Index can be updated with INSERT/UPDATE/DELETE operations.
–    A Table with a NonClustered ColumnStore Index makes the entire table read-only.
–    Both Clustered & NonClustered ColumnStore Index has new Archival Compression option (COLUMNSTORE_ARCHIVE) to further compress the data.

SQL 2016 offering for ColumnStore.
–    A Table can have a NonClustered ColumnStore Index with read/write capabilities.
–    A Clustered ColumnStore Index table can have one or more NonClustered RowStore Indexes.
–    A Clustered ColumnStore Index can now be unique (primary & foreign keys are allowed).
–    Ability to create Filtered NonClustered ColumnStore Index by specifying a WHERE clause to it.
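As a sketch of the SQL 2016 filtered capability, the demo index could be rebuilt with a WHERE clause (the new index name is hypothetical; the earlier nonclustered columnstore index must be dropped first, since a table can have only one):

```sql
-- SQL 2016: filtered nonclustered columnstore index,
-- storing only the rows that match the WHERE clause.
USE [ColumnStoreDemo]
DROP INDEX [IX_CSTable_NC_ColumnStore] ON CSTable
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_CSTable_NC_Filtered]
ON CSTable (EMPID, City, ProductGroup)
WHERE ProductGroup = 'EV'
GO
```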

More Details about this topic
Clustered Column store Indexes
MSDN

#columnstoreindex, #sql-performance, #sql2016

Row Level Security (RLS)

SQL 2016 comes with a new developer feature called Row-Level Security (RLS), which uses predicate-based security for controlling row access based on attributes of the user login. For example:

A database contains the details of all books, authors, and associated costs. When an author supplies his credentials, he should be able to access only the books he wrote, with the total cost, as in the following query and output.

RLS

RLS works on the following two key concepts for implementing security.

Predicate function – An inline table-valued function that implements access control logic (for example, returning a row depending on the principal_name, role, or other attributes of the calling user).

Security predicate – Glues a predicate function to a table (for example, applying a function that checks for rep name or the rep manager role to the Accounts table).

Let's use the following code and screenshots for better understanding:-

--Create sample database. 
USE [master]
CREATE DATABASE RLSDEMO
GO

--Create 'EmployeeData' table. 
USE [RLSDEMO]
CREATE TABLE EmployeeData 
(
       EMPID INT PRIMARY KEY, 
       Firstname VARCHAR(50),
       LastName VARCHAR(50)
)
GO

----Create another table 'books'. 
USE [RLSDEMO]
CREATE TABLE Books
(
       BOOKID INT PRIMARY KEY,
       BookName VARCHAR(100),
       Cost INT,
       AuthorID INT FOREIGN KEY REFERENCES EmployeeData (EMPID)
)      
GO

----Insert records in employee table. 
USE [RLSDEMO]
INSERT INTO EmployeeData VALUES
(1, 'Vikram', 'Jadhav'), 
(2, 'Pradeep', 'Papnai'),
(3, 'Darshan', 'Bandodkar'), 
(4, 'Ajay', 'Jain')
GO 

----Insert records in books table. 
USE [RLSDEMO]
INSERT INTO Books VALUES
(1, 'Troubleshooting Vault Cache', 5000, 1),
(2, 'Troubleshooting Exchange', 12000, 1),
(3, 'PST migration black book', 5000, 1),
(4, 'EV & MAC client', 5000, 2),
(5, 'EV DB on SQL 2K16', 11000, 2), 
(6, 'Storage overview', 4000, 2),
(7, 'FSA crash overview', 6000, 3),
(8, 'FSAUtility basis', 3000, 3),
(9, 'SSMW-Server migration', 1500, 3),
(10, 'Dtrace fundamentals', 2000, 1), 
(11, 'Exchange mail flow', 4000, 1),
(12,'Networking tools NetMON-wireshark',3400, 3)
GO

----Create view for select query simplicity. 
CREATE VIEW BooksCost
AS SELECT 
       EmployeeData.Firstname,
       EmployeeData.LastName,
       Books.BOOKID,
       Books.BookName, 
       Books.Cost
FROM EmployeeData
INNER JOIN Books ON Books.AuthorID=EmployeeData.EMPID
GO

--Create a 'Manager' and 3 other users who are also employees and the authors of various books. We will assume the first name as the login ID. 
CREATE USER Manager WITHOUT LOGIN
CREATE USER vikram WITHOUT LOGIN
CREATE USER pradeep WITHOUT LOGIN
CREATE USER darshan WITHOUT LOGIN
GO

---- Grant full access to the manager user, and read-only access to the remaining users(authors). 
GRANT SELECT, INSERT, UPDATE, DELETE ON BooksCost TO Manager
GRANT SELECT ON BooksCost TO vikram
GRANT SELECT ON BooksCost TO pradeep
GRANT SELECT ON BooksCost TO darshan
GO

--- Create a new schema for the predicate function; it is NOT mandatory but recommended. 
CREATE SCHEMA SEC
GO

--- Create the predicate function
CREATE FUNCTION sec.fn_Author_Predicate(@Username AS varchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
-- Return 1 if the connection username matches the @Username parameter
-- or if the connection username is Manager
SELECT 1 AS Author_predicate_result
WHERE @Username = USER_NAME() OR USER_NAME() = 'Manager'
GO

-- Create and enable a security policy adding the function as a filter predicate
USE [RLSDEMO]
CREATE SECURITY POLICY sec.EmployeePolicyFilter
       ADD FILTER PREDICATE sec.fn_Author_Predicate(Firstname) 
       ON dbo.EmployeeData
       WITH (STATE = ON)
GO

Test 1:-

--- As vikram, we get just the rows he owns, with total books and cost
EXECUTE AS USER = 'vikram'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOK COUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

vikram.jpg

Test 2:-

--- As pradeep, we get just the rows he owns, with total books and cost
EXECUTE AS USER = 'pradeep'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOK COUNT', count(bookid), 'TOTAL COST (INR)', sum(COST) FROM BooksCost
REVERT
GO

pradeep.jpg

Test 3:-

--- As darshan, we get just the rows he owns, with total books and cost
EXECUTE AS USER = 'darshan'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOKCOUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

darshan.jpg

Test 4:-

--- As Manager, we get all rows
EXECUTE AS USER = 'Manager'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOKCOUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

Manager.jpg

Disable / Enable RLS

--- We can toggle the security policy on and off
ALTER SECURITY POLICY sec.EmployeePolicyFilter WITH (STATE = OFF)

---Enable again
ALTER SECURITY POLICY sec.EmployeePolicyFilter WITH (STATE = ON)
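Note that a filter predicate only hides rows from reads; SQL 2016 also supports block predicates, which stop writes that would violate the policy. A hedged sketch extending the same policy:

```sql
-- Block INSERTs of employee rows whose Firstname does not match
-- the current user (Manager is still allowed by the predicate function).
ALTER SECURITY POLICY sec.EmployeePolicyFilter
    ADD BLOCK PREDICATE sec.fn_Author_Predicate(Firstname)
    ON dbo.EmployeeData AFTER INSERT
GO
```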

More details from MSDN

#rls, #sql, #sql2016