Overview of Column Store Indexes (CSI).

Since SQL Server 2012, Microsoft has been enhancing columnstore indexes, which can dramatically improve the performance of SQL queries. You may ask: what is a columnstore index, and how is it different from a traditional rowstore index? Here is an example to clear up the doubts.

Here is a small table.

ID Name Salary
1 emp1 1000
2 emp2 1200
3 emp3 1100

The data will be stored in the following formats when:

A.    Row Store index is used
Page 1  (1, emp1, 1000),
Page 2  (2, emp2, 1200),
Page 3  (3, emp3, 1100)

B.    Column Store index is used
Page 1 (1, 2, 3),
Page 2 (emp1, emp2, emp3),
Page 3 (1000, 1200, 1100)

Now, without discussing it at length, let's see a demonstration of the difference in performance.

-- Creating Sample demo database. 
USE [master]
CREATE DATABASE ColumnStoreDemo
GO

-- Creating Table to demo CS indexes
USE [ColumnStoreDemo]
CREATE TABLE CSTable
(
EMPID INT IDENTITY(1,1),
FirstName VARCHAR(100),
LastName VARCHAR (100),
City VARCHAR (100),
ProductGroup VARCHAR (100),
SALARY BIGINT,
JoingDate DATE DEFAULT GETDATE()
)
GO

-- Creating rowstore clustered & nonclustered indexes. 
CREATE CLUSTERED INDEX CSTable_CI_EMPID ON CSTable (EMPID)
CREATE NONCLUSTERED INDEX CSTable_CI_FirstName ON CSTable (FirstName)
GO

-- Creating a non clustered column store index. 
CREATE NONCLUSTERED COLUMNSTORE INDEX [IX_CSTable_NC_ColumnStore] ON CSTable
(EMPID, FirstName, LastName, City, ProductGroup, JoingDate)
GO
-- Query to show all indexes of a given table.
SELECT
ind.name, 
obj.name, 
ind.type_desc
FROM sys.indexes ind INNER JOIN sys.objects obj ON ind.object_id = obj.object_id
WHERE obj.name like 'CSTable'
GO

Indexes

-- Inserting 7 rows per batch, 10,000 times (70,000 rows total). 
INSERT INTO CSTable (Firstname, LastName, City, ProductGroup, SALARY)
VALUES
('Pradeep', 'Papnai', 'Pune', 'EV', 1000),
('Vikram', 'Jadhav', 'Pune', 'EV', 2500), 
('Rahul', 'Sharma', 'Pune', 'Antivirus', 1500),
('Rahul', 'Gupta', 'Pune', 'Exchange', 1900),
('Ajay', 'Gadge', 'Mumbai', 'EV', 2100), 
('Ajay', 'Jain', 'Mumbai', 'EV', 1600), 
('Rahul', 'Katiyar', 'Delhi', 'EV', 1660)
GO 10000

Run the following two queries together with ‘Include Actual Execution Plan’ enabled:

USE [ColumnStoreDemo]
SELECT COUNT(*), CITY, PRODUCTGROUP FROM CSTable
GROUP BY CITY, PRODUCTGROUP

SELECT COUNT(*), CITY, PRODUCTGROUP FROM CSTable
GROUP BY CITY, PRODUCTGROUP
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) -- Ignoring the columnstore index forces the optimizer to fall back to the rowstore indexes. 
GO

Example1

The Column Store Index (CSI) feature was introduced with SQL 2012, and significant improvements were made in subsequent versions of SQL Server (2014/2016). CSI uses the xVelocity technology, which is based on the VertiPaq engine. Columnstore indexes store data column-wise: each page stores data from a single column. Traditional rowstore indexes (heap/clustered/nonclustered) save the data for each row together in data pages.

During compilation, the query optimizer automatically considers and prioritizes columnstore indexes unless we ignore them manually (with the hint OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX)).

Columnstore indexes increase performance by reducing I/O, because they:
–    Read less data, which is highly compressed.
–    Process data from disk to memory in units called “batches”.
–    Perform segment elimination (each segment stores the minimum & maximum values for numeric columns, so segments outside the query’s range can be skipped).

A Row Group is a group of up to 1,048,576 (~1 million) logically contiguous rows.
A Column Segment contains the values from one column for one row group.
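As a rough illustration (assuming the CSTable demo above; catalog column names vary slightly across versions — SQL 2012/2014 expose hobt_id rather than partition_id), the per-segment metadata that drives segment elimination can be inspected via sys.column_store_segments:

```sql
-- A sketch: inspect columnstore segment metadata for the demo table.
-- min_data_id / max_data_id hold the per-segment minimum and maximum
-- (raw values or dictionary ids); the engine skips segments whose
-- range cannot match the predicate (segment elimination).
SELECT seg.column_id, seg.segment_id, seg.row_count,
       seg.min_data_id, seg.max_data_id, seg.on_disk_size
FROM sys.column_store_segments AS seg
JOIN sys.partitions AS par ON seg.partition_id = par.partition_id
WHERE par.object_id = OBJECT_ID('CSTable')
ORDER BY seg.column_id, seg.segment_id
```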

Architecture
Segments are compressed, and a segment is the unit of transfer between disk and memory.

Architecture2
SQL 2012 offerings for ColumnStore
–    A Table (Heap or BTree) can have only one NonClustered ColumnStore Index.
–    A Table with a NonClustered ColumnStore Index becomes read-only and cannot be updated.
–    The NonClustered ColumnStore Index uses segment compression for high compression of columnar data.

SQL 2014 offerings for ColumnStore.
–    You can create one Clustered ColumnStore Index on a Table, and no further indexes can be created.
–    A Table with a Clustered ColumnStore Index can be updated with INSERT/UPDATE/DELETE operations.
–    A Table with a NonClustered ColumnStore Index still becomes entirely read-only.
–    Both Clustered & NonClustered ColumnStore Indexes have a new archival compression option (COLUMNSTORE_ARCHIVE) to further compress the data.
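The archival option can be applied when rebuilding an index; a minimal sketch against the demo index created earlier (SQL 2014+):

```sql
-- A sketch: rebuild the demo columnstore index with archival compression
-- (COLUMNSTORE_ARCHIVE) for extra space savings at some CPU cost.
ALTER INDEX [IX_CSTable_NC_ColumnStore] ON CSTable
REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE)
GO
```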

SQL 2016 offerings for ColumnStore.
–    A Table can have a NonClustered ColumnStore Index with read/write capabilities.
–    A Clustered ColumnStore Index table can have one or more NonClustered RowStore Indexes.
–    A Clustered ColumnStore Index can now be unique (Primary & Foreign keys are allowed).
–    Ability to create a filtered NonClustered ColumnStore Index by specifying a WHERE clause on it.
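A filtered columnstore index (SQL 2016) might look like the sketch below. Note this assumes the earlier demo index on CSTable was dropped first, since a table can have only one nonclustered columnstore index:

```sql
-- A sketch: a filtered nonclustered columnstore index (SQL 2016+)
-- covering only rows with SALARY above 2000.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_CSTable_NCCI_Filtered
ON CSTable (EMPID, City, ProductGroup, SALARY)
WHERE SALARY > 2000
GO
```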

More Details about this topic
Clustered Column store Indexes
MSDN


#columnstoreindex, #sql-performance, #sql2016

Row Level Security (RLS)

SQL 2016 comes with a new developer feature called Row-Level Security (RLS), which uses predicate-based security to control row access based on attributes of the user's login. Consider the example below:

A DB contains the details of all books, their authors, and the associated cost. When an author supplies his credentials, he should be able to access only the books he wrote, along with their total cost, as in the following query and output.

RLS

RLS relies on the following two key concepts for implementing security.

Predicate function – An inline table-valued function that implements access control logic (for example, returning a row depending on the principal_name, role, or other attributes of the calling user).

Security predicate – Glues a predicate function to a table (for example, applying a function that checks for rep name or the rep manager role to the Accounts table).

Let’s use the following code and screenshots for better understanding:-

--Create sample database. 
USE [master]
CREATE DATABASE RLSDEMO
GO

--Create 'EmployeeData' table. 
USE [RLSDEMO]
CREATE TABLE EmployeeData 
(
       EMPID INT PRIMARY KEY, 
       Firstname VARCHAR(50),
       LastName VARCHAR(50)
)
GO

----Create another table 'books'. 
USE [RLSDEMO]
CREATE TABLE Books
(
       BOOKID INT PRIMARY KEY,
       BookName VARCHAR(100),
       Cost INT,
       AuthorID INT FOREIGN KEY REFERENCES EmployeeData (EMPID)
)      
GO

----Insert records in employee table. 
USE [RLSDEMO]
INSERT INTO EmployeeData VALUES
(1, 'Vikram', 'Jadhav'), 
(2, 'Pradeep', 'Papnai'),
(3, 'Darshan', 'Bandodkar'), 
(4, 'Ajay', 'Jain')
GO 

----Insert records in books table. 
USE [RLSDEMO]
INSERT INTO Books VALUES
(1, 'Troubleshooting Vault Cache', 5000, 1),
(2, 'Troubleshooting Exchange', 12000, 1),
(3, 'PST migration black book', 5000, 1),
(4, 'EV & MAC client', 5000, 2),
(5, 'EV DB on SQL 2K16', 11000, 2), 
(6, 'Storage overview', 4000, 2),
(7, 'FSA crash overview', 6000, 3),
(8, 'FSAUtility basis', 3000, 3),
(9, 'SSMW-Server migration', 1500, 3),
(10, 'Dtrace fundamentals', 2000, 1), 
(11, 'Exchange mail flow', 4000, 1),
(12,'Networking tools NetMON-wireshark',3400, 3)
GO

----Create view for select query simplicity. 
CREATE VIEW BooksCost
AS SELECT 
       EmployeeData.Firstname,
       EmployeeData.LastName,
       Books.BOOKID,
       Books.BookName, 
       Books.Cost
FROM EmployeeData
INNER JOIN Books ON Books.AuthorID=EmployeeData.EMPID
GO

--Create a 'Manager' and 3 other users who are also employees and the authors of various books. We will assume the first name is the login ID. 
CREATE USER Manager WITHOUT LOGIN
CREATE USER vikram WITHOUT LOGIN
CREATE USER pradeep WITHOUT LOGIN
CREATE USER darshan WITHOUT LOGIN
GO

---- Grant full access to the manager user, and read-only access to the remaining users (authors). 
GRANT SELECT, INSERT, UPDATE, DELETE ON BooksCost TO Manager
GRANT SELECT ON BooksCost TO vikram
GRANT SELECT ON BooksCost TO pradeep
GRANT SELECT ON BooksCost TO darshan
GO

--- Create a new schema for the predicate function; it is NOT mandatory but is recommended. 
CREATE SCHEMA SEC
GO

--- Create the predicate function
CREATE FUNCTION sec.fn_Author_Predicate(@Username AS varchar(50))
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
-- Return 1 if the connection username matches the @Username parameter
-- or if the connection username is Manager
SELECT 1 AS Author_predicate_result
WHERE @Username = USER_NAME() OR USER_NAME() = 'Manager'
GO

-- Create and enable a security policy adding the function as a filter predicate
USE [RLSDEMO]
CREATE SECURITY POLICY sec.EmployeePolicyFilter
       ADD FILTER PREDICATE sec.fn_Author_Predicate(Firstname) 
       ON dbo.EmployeeData
       WITH (STATE = ON)
GO

Test 1:-

--- As vikram, we get just the rows he owns, with the book count and total cost
EXECUTE AS USER = 'vikram'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOK COUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

vikram.jpg

Test 2:-

--- As pradeep, we get just the rows he owns, with the book count and total cost
EXECUTE AS USER = 'pradeep'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOK COUNT', count(bookid), 'TOTAL COST (INR)', sum(COST) FROM BooksCost
REVERT
GO

pradeep.jpg

Test 3:-

--- As darshan, we get just the rows he owns, with the book count and total cost
EXECUTE AS USER = 'darshan'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOKCOUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

darshan.jpg

Test 4:-

--- As Manager, we get all rows
EXECUTE AS USER = 'Manager'
SELECT Firstname, LastName, BOOKID, BookName, Cost FROM BooksCost 
UNION ALL
SELECT NULL, 'BOOKCOUNT', count(bookid), 'TOTAL COST(INR)', sum(COST) FROM BooksCost
REVERT
GO

Manager.jpg

Disable / Enable RLS

--- We can toggle the security policy on and off
ALTER SECURITY POLICY sec.EmployeePolicyFilter WITH (STATE = OFF)

---Enable again
ALTER SECURITY POLICY sec.EmployeePolicyFilter WITH (STATE = ON)
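Filter predicates only hide rows on read. SQL 2016 also supports block predicates, which stop non-matching rows from being written; a sketch reusing the same predicate function:

```sql
-- A sketch: attach the same function as a BLOCK predicate so a user
-- cannot INSERT rows that do not belong to them.
ALTER SECURITY POLICY sec.EmployeePolicyFilter
ADD BLOCK PREDICATE sec.fn_Author_Predicate(Firstname)
ON dbo.EmployeeData AFTER INSERT
GO
```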

More details from MSDN

#rls, #sql, #sql2016

Dynamic Data Masking (DDM)

In your SQL table ‘Employee’, you may have sensitive information such as credit card numbers, passport numbers, mobile phone numbers, etc., and you don’t want to expose that sensitive information to non-privileged users; at the same time, you want every user with database/table-level permission to have access to the remaining columns, such as EmployeeID, location, and employee name.

A new feature called “Dynamic Data Masking” (DDM), introduced with SQL 2016, can meet the above requirement without a complex set of front-end or DB-level code. DDM limits sensitive data exposure by masking it from non-privileged users.

There are four types of masking that can be configured during table creation.

Default Masks the data according to the field's data type: if the data type is binary, varbinary or image, a single byte of binary value 0 is used to mask the field. For date and time data types, the value 01.01.1900 00:00:00.0000000 is used. For numeric data types, a zero value is used.

Partial Defines your own mask for the specified field by exposing the first and last characters (defined by prefix and suffix lengths) and adding a padding string that is shown in the middle.

Email Masks fields that store email addresses.

Random Masks any numeric data type by replacing the original value with a random value within the range specified in the function.

A table definition will look like below.

/*First create a dedicate database*/
USE [master]
CREATE DATABASE DDMDEMO
GO

/*Create table with Dynamic Data Masking enabled*/
USE [DDMDEMO]
CREATE TABLE EmployeeData
(
EMPID INT   MASKED WITH (FUNCTION ='random(1, 12)') IDENTITY(1,1), -- Random mask: the int will be replaced by a random value between 1 and 12.
FIRSTNAME VARCHAR(100), -- Leave this column without masking. 
LASTNAME VARCHAR (100) MASKED WITH (FUNCTION='partial(2, "...", 2)') NULL, -- Partial mask: the first two and last two characters are shown; the middle characters are masked.
PHONENo VARCHAR(50) MASKED WITH (FUNCTION='default()') NULL, -- Example of the default mask.
EMAIL VARCHAR(100) MASKED WITH (FUNCTION = 'email()') NULL -- Example of the email masking function
)
GO

Populate the table with a few rows.

USE [DDMDEMO]
INSERT INTO EmployeeData 
 VALUES
('Pradeep', 'Papnai', '919595852246', 'papnai.pradeep@gmail.com'),
('Darhan', 'Bandodkar','91959568524','darshan.bandokar@hotmail.com'),
('Vikram', 'Jadhav', '915682147562', 'Vikram.jadhav@hotmail.com'),
('Ajay', 'Jain', '9545684575','ajay.jain@yahoo.com'),
('Yogesh','Ghute', '959586575826', 'yogesh.ghute@city.com')
GO

By default, dbo has the UNMASK permission and can retrieve all columns unmasked.

USE [DDMDEMO]
SELECT EMPID, FIRSTNAME, LASTNAME, PHONENo, EMAIL FROM EmployeeData 
GO

dboretrive.jpg

Now create a ‘TestUser’ without login and give ‘Select’ permission on the table.

CREATE USER TestUser WITHOUT LOGIN
GRANT SELECT ON EmployeeData TO TestUser
GO

Retrieve the rows using the ‘TestUser’ credentials.

EXECUTE AS USER = 'TestUser'
USE [DDMDEMO]
SELECT EMPID, FIRSTNAME, LASTNAME, PHONENo, EMAIL FROM EmployeeData 
REVERT
GO

The output is masked exactly as you would expect from the masking defined for the ‘LastName’, ‘PhoneNo’ & ‘Email’ columns.

userretrive.jpg

As noted, dbo has the UNMASK privilege by default, and this permission can be assigned to any other legitimate user. Let’s do the same using a query.

GRANT UNMASK TO TestUser
EXECUTE AS USER = 'TestUser'
USE [DDMDEMO]
SELECT EMPID, FIRSTNAME, LASTNAME, PHONENo, EMAIL FROM EmployeeData 
REVERT
GO

dboretrive

A few things to note:-

  • Creating a mask on a column does not prevent updates to that column. So although users receive masked data when querying the masked column, the same users can update the data if they have write permissions.
  • A masking rule cannot be defined for certain column types, such as encrypted columns or FILESTREAM columns.
  • Use the masked_columns view to query for table-columns that have a masking function applied to them
USE DDMDEMO
SELECT c.name, tbl.name as table_name, c.is_masked, c.masking_function  
FROM sys.masked_columns AS c
JOIN sys.tables AS tbl   
    ON c.[object_id] = tbl.[object_id]  
WHERE is_masked = 1; 

dmv.jpg

More details about DDM

#ddm, #dymamic-data-masking, #sql, #sql-mask, #sql2016

SQL 2016 Temporal Tables.

Here is the situation for SQL DBA/Developer.

  1. A new startup company began with 5 employees; after a year the number of employees grew to 100, and subsequently 3-5 employees leave every month. Now the DBA/Developer needs a report of all employees who either worked or are currently working in the organization. Another requirement is a report of all employees whose ‘first name’/‘last name’ changed over time.
  2. A product-based company keeps introducing new products and retiring old products from its list at regular intervals. Now the DBA needs to know all the products that were present on one specific past date.

All of the above can be handled with complex front-end code or by maintaining lots of different tables for generating such reports. Now SQL 2016 comes with a new feature called “Temporal Tables” that can achieve the above requirements with minimal code change. In this blog, I am going to show you how this can be done. Please note that a Temporal table is NOT the same as a Temporary table.

Temporal Tables are defined as: “A system-versioned temporal table is designed to keep a full history of data changes and allow easy point in time analysis”.

Here is the list of prerequisites for temporal tables:

  1. A primary key must be defined in the table.
  2. Two columns must be defined to record the start and end dates, with a data type of ‘datetime2’; these two columns can be hidden using the ‘HIDDEN’ keyword.

Let’s now move to SQL 2016 LAB to see how this feature works.

USE [master]
CREATE DATABASE TemporalTableDemo
GO 

USE [TemporalTableDemo]
CREATE TABLE EmployeeData 
(
EMPID INT PRIMARY KEY,  ---Primary key is required
Firstname  VARCHAR(50),
LastName  VARCHAR(50),
City VARCHAR(50), 
/*Period columns needed for temporal tables*/
ValidFrom DATETIME2 GENERATED ALWAYS AS ROW START HIDDEN NOT NULL,
ValidTo DATETIME2 GENERATED ALWAYS AS ROW END HIDDEN NOT NULL,
PERIOD FOR SYSTEM_TIME (ValidFrom,ValidTo)
)
/*The system-versioning configuration needed for a temporal table. If we don't specify the history table, the system will create one automatically, named dbo.MSSQL_TemporalHistoryFor_XXX.
*/
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = [dbo].[EmployeeDataHistory]))
GO

This is how the Temporal table schema should look.

TableStruture.jpg

Now let’s insert 8 rows.

USE [TemporalTableDemo]
INSERT INTO EmployeeData (EMPID, FIRSTNAME, LASTNAME, CITY) VALUES
              (1, 'Employee1', 'Lastname1', 'city1'), 
              (2, 'Employee2', 'Lastname2', 'city2'), 
              (3, 'Employee3', 'Lastname3', 'city3'),
              (4, 'Employee4', 'Lastname4', 'city4'),
              (5, 'Employee5', 'Lastname5', 'city5'), 
              (6, 'Employee6', 'Lastname6', 'city6'),
              (7, 'Employee7', 'Lastname7', 'city7'), 
              (8, 'Employee8', 'Lastname8', 'city8')
GO

Now let’s retrieve all the rows to confirm the records are inserted.

/*Select all employees data */
USE [TemporalTableDemo]
SELECT * FROM EmployeeData
GO 

/*The SELECT * statement above will not return the period columns because of the HIDDEN keyword in the table definition; however, you can list the columns explicitly.
*/
USE [TemporalTableDemo]
SELECT EMPID,Firstname, LastName, City, ValidFrom, ValidTo FROM EmployeeData
GO

Let's assume a few employees left over time, new employees joined, and some employees' details such as first name, last name, or city changed.

USE [TemporalTableDemo]
DELETE FROM EmployeeData WHERE Firstname IN ('Employee1', 'Employee2')  
UPDATE EmployeeData SET LastName = 'ChangedLastName3' WHERE Firstname = 'Employee3'
UPDATE EmployeeData SET LastName = 'ChangedLastName4' WHERE Firstname = 'Employee4' 
INSERT INTO EmployeeData  (EMPID, FIRSTNAME, LASTNAME, CITY) VALUES
              (9, 'Employee9', 'LastName9', 'City9'), 
              (10, 'Employee10', 'LastName10','City10') 
GO

A copy of every updated or deleted row is inserted into the history table, and the DBA can query this historical data for point-in-time analysis.

Example 1 All current employees.

USE [TemporalTableDemo]
SELECT  EMPID, Firstname, LastName, City FROM EmployeeData
GO

Example 2 All current employees whose details, such as first name, last name, or city, changed over time.

USE [TemporalTableDemo]
SELECT  
EmployeeData.EMPID, EmployeeDataHistory.Firstname, EmployeeDataHistory.LastName, EmployeeDataHistory.City, EmployeeDataHistory.ValidTo as UpdatedON 
FROM EmployeeData
INNER JOIN EmployeeDataHistory on EmployeeDataHistory.EMPID=EmployeeData.EMPID
GO

Example 3 All employees who left the company.

USE [TemporalTableDemo]
SELECT  
EmployeeDataHistory.EMPID, EmployeeDataHistory.Firstname, EmployeeDataHistory.LastName, EmployeeDataHistory.City, EmployeeDataHistory.ValidTo AS TerminatedOn 
FROM EmployeeDataHistory
LEFT JOIN EmployeeData ON EmployeeData.EMPID = EmployeeDataHistory.EMPID
WHERE EmployeeData.EMPID IS NULL
GO

Example 4 The temporal table can also be queried with the new ‘FOR SYSTEM_TIME’ clause, as in the example below. More details

USE [TemporalTableDemo]
SELECT EMPID,Firstname, LastName, City FROM EmployeeData
FOR SYSTEM_TIME AS OF '2017-03-03 09:52:07.5372573'
--OR use BETWEEN
--FOR SYSTEM_TIME BETWEEN '2017-03-03 09:52:07.5372573' AND '2017-03-03 09:59:01.5704154'
GO

Limitation with temporal tables.

  1. Temporal and history tables cannot be FILETABLEs.
  2. The history table cannot have any constraints.
  3. INSERT and UPDATE statements cannot reference the SYSTEM_TIME period columns.
  4. Data in the history table cannot be modified.
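To verify which tables are system-versioned and where their history lives, sys.tables exposes temporal metadata in SQL 2016; a sketch:

```sql
-- A sketch: list system-versioned temporal tables and their history tables.
SELECT t.name AS temporal_table,
       t.temporal_type_desc,
       h.name AS history_table
FROM sys.tables AS t
LEFT JOIN sys.tables AS h ON t.history_table_id = h.object_id
WHERE t.temporal_type = 2  -- SYSTEM_VERSIONED_TEMPORAL_TABLE
GO
```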

Here is an example PowerShell script that generates an HTML report for all the above queries, for better understanding.

#Script start,  
#Change FilePath & SQL server name 
$SQLServer='LABSQL'
$FilePath='C:\temp\CurrentEmployee.HTML'
$connectionString = "Data Source=$SQLServer;Integrated Security=true;Initial Catalog=TemporalTableDemo;"
$connection= New-Object System.Data.SqlClient.SqlConnection
$connection.ConnectionString = $connectionString
$connection.Open()
$query1= "SELECT  EMPID, Firstname, LastName, City FROM EmployeeData"
$query2= "SELECT  EmployeeData.EMPID,EmployeeDataHistory.Firstname, EmployeeDataHistory.LastName,
EmployeeDataHistory.City, EmployeeDataHistory.ValidTo as UpdatedON 
FROM EmployeeData INNER JOIN EmployeeDataHistory on EmployeeDataHistory.EMPID=EmployeeData.EMPID"
$query3= "SELECT EmployeeDataHistory.EMPID, EmployeeDataHistory.Firstname, EmployeeDataHistory.LastName, EmployeeDataHistory.City, EmployeeDataHistory.ValidTo as TerminatedOn 
FROM EmployeeDataHistory LEFT JOIN EmployeeData ON EmployeeData.EMPID = EmployeeDataHistory.EMPID
WHERE EmployeeData.EMPID IS NULL"
$command1= $connection.CreateCommand()
$command1.CommandText =$query1 
$result1= $command1.ExecuteReader()
$table1= new-object "System.Data.DataTable"
$table1.load($result1)
$command2= $connection.CreateCommand()
$command2.CommandText =$query2
$result2= $command2.ExecuteReader()
$table2= new-object "System.Data.DataTable"
$table2.load($result2)
$command3= $connection.CreateCommand()
$command3.CommandText =$query3
$result3= $command3.ExecuteReader()
$table3= new-object "System.Data.DataTable"
$table3.load($result3) 
$Header = @"
<style>
TABLE {border-width:4px;border-style: solid;border-color: black;border-collapse: collapse;}
TH {border-width:1px;padding: 3px;border-style: solid;border-color: black;background-color:#FE642E;}
TD {border-width: 1px;padding:3px;border-style: solid;border-color: black;}
</style>
"@
$frag1= $table1| SELECT EMPID, Firstname, LastName, City | ConvertTo-HTML -PreContent '<H2> Details of current Employees</H2>' |Out-String
$frag2= $table2| SELECT EMPID, Firstname, LastName, City, UpdatedON |ConvertTo-HTML -PreContent '<H2> Updated Employees</H2>' |Out-String
$frag3= $table3| SELECT EMPID, Firstname, LastName, City, TerminatedOn |ConvertTo-HTML -PreContent '<H2> All Terminated Employees </H2>'|Out-String
ConvertTo-Html -Head $Header -PostContent $frag1,$frag2,$frag3 -PreContent '<h1>Employees Report</h1>' |Out-File $FilePath
$connection.Close()
#Script Ends, The output should look like below. 
HTML report.jpg

#pointintimeanalysis, #sql2016, #sqlaudit, #temporaltable

SQL 2016 Live Query Statistics

In my last post we learned how ‘Query Store’ works; now it is time to show another amazing feature called ‘Live Query Statistics’, also released with SQL 2016. To clarify, this feature is included with SQL Server Management Studio 2016, which means it also works when connected to SQL Server 2014.

Let’s now see how this feature can be beneficial for DBAs, consultants, or support engineers.

The steps for preparing the LAB environment are the same as those mentioned in the ‘Query Store’ blog.

/* Please note: the database and tables are designed only for training purposes, so they don't necessarily have the best design. */
-- Create Database & two tables 'Region' and 'EmployeeData'
USE [master]
CREATE DATABASE QueryStoreDemo
GO

USE [QueryStoreDemo]
CREATE TABLE Region
       (
       ID INT IDENTITY(1,1),
       RNAME VARCHAR (100),
       HQ VARCHAR(100)
       )
GO

USE [QueryStoreDemo]
CREATE TABLE EmployeeCity
       (
       ID INT IDENTITY (1, 1),
       NAME VARCHAR (100),
       City VARCHAR(100), 
       RegionID INT
       )
GO

-- Insert values; the second insertion might take time (3-5 minutes) as we are inserting 5 rows 50,000 times. 
USE [QueryStoreDemo]
INSERT INTO Region  VALUES
       ('PUNE', 'SOUTH'),
       ('GOA', 'EAST'),
       ('NASHIK', 'WEST'),
       ('NASIK','NORTH'),
       ('SOLAPUR','Center')
GO

USE [QueryStoreDemo]
INSERT INTO EmployeeCity
VALUES
       ('Pradeep', 'Pune', 1),  
       ('Vikram', 'Mumbai', 2),
       ('Darshan', 'Goa', 3),
       ('AJAY', 'Nashik', 4),
       ('Yogesh', 'Solapur', 5)
GO 50000

Continue reading

#live-query-statistics, #query-plan, #sql, #sql2016

SQL 2016 Query Store

One of the amazing new features that came with the SQL 2016 release is the ‘Query Store’. This is going to be a game changer, most useful for SQL DBAs, support engineers, and consultants who used to rely on the ‘plan cache’ (AKA ‘procedure cache’) to track down query statistics and execution plans for queries run in the past. The plan cache is useful for finding out what changed after events such as a missing index or an application upgrade.

The biggest disadvantage of the plan cache is that it lives in memory and gets flushed out when the SQL service restarts (or the server reboots). High memory pressure on the SQL instance can also cause the plan cache to be flushed from memory.

Now with SQL 2016, if ‘Query Store’ is enabled, execution plans are saved on disk and can be accessed even after the server is rebooted. This new feature is enabled per database with a minimal configuration change. The saved execution plans can be accessed using GUI reports or built-in DMV queries.
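As a sketch of that minimal configuration change (using the demo database name from later in this blog), Query Store is switched on per database, and its saved plans can be read back from the catalog views:

```sql
-- A sketch: enable Query Store on a database and read back captured
-- queries, plans, and runtime statistics from the catalog views.
ALTER DATABASE QueryStoreDemo SET QUERY_STORE = ON
GO
SELECT qt.query_sql_text,
       pl.plan_id,
       rs.count_executions,
       rs.avg_duration        -- microseconds
FROM sys.query_store_query_text AS qt
JOIN sys.query_store_query AS q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS pl ON q.query_id = pl.query_id
JOIN sys.query_store_runtime_stats AS rs ON pl.plan_id = rs.plan_id
GO
```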

Continue reading

#plancache, #query-performance, #querystore, #sql

SQL 2016 on Linux OS

Since SQL Server 2016, customers can choose a platform other than the Windows OS. SQL Server can be installed and configured on Linux, and it is completely supported. In my opinion, the decision to support Linux is meant to compete with database engines such as MySQL, PostgreSQL, and Oracle, which provide similar capabilities. The licensing & editions are still not clear, but the platform provides most of the features a DBA will look for.

Supported Linux Operating System

Red Hat Enterprise Linux 7.3 Workstation, Server, and Desktop
SUSE Enterprise Linux Server v12 SP2
Ubuntu 16.04LTS and 16.10
Docker Engine 1.8+ on Windows, Mac, or Linux
The operating system should have a minimum of 3.25 GB of RAM, and SQL Server on Linux has been tested with up to 1 TB.

Unsupported Features

Replication/mirroring
PolyBase, Stretch Database
System extended stored procedures (xp_cmdshell, etc.)
Windows/AD authentication
User certificate SSL/TLS
Agent service / Browser service / Reporting Services
More Details

Installation demo of SQL on Ubuntu 16.04 Server

Configure prerequisite

Step 1 Download Ubuntu server ISO

Step 2 Create a new Linux VM in your virtual environment. You should have at least 4 GB RAM & 20 GB disk space (recommended for a test environment). Supply the ‘User Name’ & ‘Password’ that will be used to log in at the terminal (the command prompt of Linux), then start the installation.

Step 3. Installation will continue without any user intervention.

Step 4. Once installed successfully, supply user name / password (Refer Step 2) to login in terminal.

Step 5. Get yourself familiarized with Linux Ubuntu commands:
Get the OS details.
Get the IP address of the machine.
Get the hostname of the machine.
Check connectivity to the internet using ‘ping’.

lsb_release –a
ifconfig
hostname
ping help.ubuntu.com

If you wish to change IP address then refer

Step 6. Install the OpenSSH server and client on the Ubuntu server so you can access the terminal remotely using PuTTY, which makes it much easier to paste commands.

sudo apt-get install openssh-server openssh-client

 

(All above steps are shown in following slide show)


Installation & configuration of SQL Server

Step 1 Ensure your system is fully updated before installing SQL.

sudo apt-get update

Step 2. Install curl (curl is a tool to transfer data from or to a server using a supported protocol such as HTTP, IMAP, or LDAP).

sudo su
sudo apt-get install curl

STEP 3 Add the ‘SQL Server’ repository information under ‘/etc/apt/sources.list.d/’.

curl https://packages.microsoft.com/config/ubuntu/16.04/mssql-server.list >/etc/apt/sources.list.d/mssql-server.list
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -

STEP 4 Install SQL server.

sudo apt-get install mssql-server

STEP 5 Run the configuration script to accept the license agreement and provide the System Administrator (SA) password.

sudo /opt/mssql/bin/mssql-conf setup

STEP 6 The SQL Server service is enabled and started by default. If it is not, enable and start the service. Also verify that SQL Server is listening on port 1433.

sudo systemctl enable mssql-server
sudo systemctl start mssql-server
sudo netstat -peant | grep ":1433"

STEP 7 SQL Server is now ready; a DBA can access it using an ADO.NET client or SQL Server Management Studio installed on a Windows machine for administration purposes.

(All above steps are shown in following slide show)


Installation of SQL Server command-line tools (SQLCMD)

Step 1 Register the Microsoft repository for the SQLCMD tool and ODBC drivers.

curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list>/etc/apt/sources.list.d/ms-prod.list
sudo apt-get update

Step 2 Install SQLCMD tool and ODBC Drivers.

sudo apt-get install mssql-tools

Accept the license agreement.
Step 3 Create a symlink for SQLCMD.

ln -sfn /opt/mssql-tools/bin/sqlcmd /usr/bin/sqlcmd

Step 4 Connect to SQL Server using SQLCMD, then create a database and a table, and insert a few rows.

sqlcmd -S 192.168.132.132 -U SA -P Password123#

Use [master]
create database LinuxDBTest
go
use LinuxDBTest
create table t1 (c1 int, c2 varchar(100))
go
insert into t1 values (1, 'pradeep')
go
select * from t1

Step 5 Verify DB, Tables & row using SSMS.


#linux, #sqlonlinux