SQLServerGeeks Magazine, June 2021
https://sqlservergeeks.com/resources/magazine/SSG_Magazine_June_2021.pdf

We are thrilled to present to you the second edition of the SQLServerGeeks Magazine, June 2021.

We always wanted to expand our offerings to the SQL community, evolve, and become better at what we do, and with the magazine we have upgraded ourselves. The magazine has long been a dream, which came to reality last month with the first release. Download the May 2021 edition.

In all honesty, the journey to this point was riddled with road-bumps and potholes of all sorts. As we started conceptualizing the first edition of the magazine, we were hit quite hard by the second wave of the pandemic, affecting the families of our team members and unsettling all of us. Things seemed grey and bleak. Nonetheless, we kept our heads high and spirits higher as we gave our undivided best to bring you this edition, undoubtedly the first of many, as a reminder of the fact that perseverance does pay off.

A massive shoutout to all authors - Ben Miller, Erland Sommarskog, Martin Catherall, Prince Rastogi, Satya Ramesh, Amit Khandelwal, and DeNisha Malone - who agreed to contribute on such short notice. Hats off to them. We are truly humbled to see that the #SQLFamily has got our back, and with absolute certainty we can say that we have got theirs!

While we celebrate our magazine's success, we are equally and deeply saddened to have lost a few #SQLFamily members in the last few weeks. We lost Gareth Swanepoel and Ahmad Osama - true gems in their own way. They were kind-hearted, humble, and always willing to help others. Please consider supporting their families: Gareth fundraiser. Ahmad fundraiser.

We here at SQLServerGeeks believe everything can be fine-tuned and optimized. It is no different with our magazine. Make sure to give us your feedback so we can continue to provide quality content, well curated to your interests. Write to us at [email protected].

Just as a magician never reveals all his tricks at once, we too have a few cards up our sleeves. So make sure to stay tuned for a spectacle in the days to come. We are just getting started. Help us spread the word: ask your friends and colleagues to subscribe to the magazine.

From all of us at SQLServerGeeks, we wish you a pleasant read. Happy Learning.

Yours sincerely,
SQLServerGeeks Team

Got it from a friend? Subscribe now to get your copy.

Our Social Channels: Website | Magazine | LinkedIn | Telegram | YouTube | Twitter | Facebook

COPYRIGHT STATEMENT
Copyright 2021. SQLServerGeeks.com, c/o eDominer Systems Pvt. Ltd. All rights reserved. No part of this magazine may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or by any information storage and retrieval system, without permission in writing from the publisher. Articles contained in the magazine are copyright of the respective authors. All product names, logos, trademarks, and brands are the property of their respective owners. For any clarification, write to [email protected].

CORPORATE ADDRESS

Bangalore Office:
686, 6 A Cross,
3rd Block Koramangala,
Bangalore - 560034

Kolkata Offices:

Office 1:
eDominer Systems Pvt. Ltd.
The Chambers
Office Unit 206 (Second Floor)
1865 Rajdanga Main Road (Kasba)
Kolkata 700107

Office 2:
304, PS Continental,
83/2/1, Topsia Road (South),
Kolkata 700046
TABLE OF CONTENTS

04  Using PowerShell as a DBA
08  #DPS2021 Lucky Draw
10  Take Action for Equity, Diversity & Inclusion
13  Did You Know?
14  A Tip to Optimise LIKE Searches
19  SQL Nuggets by Microsoft
21  Data Discovery and Classification
27  DPS 2021 Announcement
29  Investigate CPU Usage in SQL Server
33  Learning Opportunities
34  Frequently Asked Questions for SQL Server on Linux/Containers
38  #SQLFamily Beyond SQL
40  SQL Server and JSON
43  SQLServerGeeks Virtual Symposium Recordings - SQL Server & Azure SQL
44  #DPS2020 Free Content Every Day


Using PowerShell
as a DBA
Ben Miller | @DBAduck

Initial Thoughts
Working as a DBA you are most often faced with multiple instances and multiple databases. Your job
is to ensure that the database servers stay running and the databases are accessible. The tools that
are in the trade of DBA consist of SQL Server Management Studio (SSMS), Azure Data Studio (ADS) or
if you are a command line person, you could use sqlcmd at times. If you are new to being a DBA,
PowerShell is becoming a standard tool as well. Some of us old ones are embracing PowerShell as a
key tool in our toolbelts. Now is the time to ensure that you have the right tools and know how to use
them. This article will illustrate some of the reasons and ways to use PowerShell as a DBA to make life
just a little more enjoyable and your work a little more scalable.

PowerShell Introduction
When you are thinking of PowerShell, what is the first thing you think of? Windows administration? Another language you need to learn? Or are you thinking, "I am glad this tool can help me be a DBA"? I can tell you that if the first two are how you think, it may be a challenge to get out of that mindset. But if you are thinking of the third item, you have a chance to get things moving in the right direction. PowerShell is a tool that Microsoft created for Windows administration first, and we can use it as DBAs because it is built upon the .NET Framework, which means that anything in .NET we can use in PowerShell. In fact, SMO (Shared Management Objects) is built using .NET, so the library and all its objects can be used in PowerShell to help administer databases and their servers. Think of all the things you can do with SMO that will help you get more from your time. Think of the projects out there that leverage this set of libraries to do work for us, to give us a command instead of lots of code when we want to do something. dbatools, an open-source module, is one of those tools we use as DBAs, as well as the SqlServer module from Microsoft. Both have a foundation in SMO and give you
a command-based toolset that will let you get information as well as change objects in the databases
and servers.

Database Administration Concepts


There are concepts as a DBA that are fundamental to our daily work. Some of them include:

• Growing Data or Log Files


• Changing Settings on Databases

These are only a few that are core to our work, but there are many more as well. Take a look at each section below to see how these concepts are addressed using PowerShell as the tool. They may be combined with dbatools to underscore the point.

Some Background First on SMO


In PowerShell you get some default objects that are loaded when PowerShell starts. SMO is not one of them. The trick today to loading SMO is to use a module to do it. It used to be that when you installed SSMS it installed SMO in the GAC (Global Assembly Cache, c:\windows\assembly), but now it does not; it is all embedded in the install and gets put in the folder that SSMS uses. It is therefore no longer as easy as calling
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo").
You now have to have the assembly where you can reference it. So you can either reference the assemblies in the SSMS space (C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE) and do:

PS:> Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Microsoft.SqlServer.Smo.dll"
PS:> Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Microsoft.SqlServer.Smo.Extended.dll"
PS:> Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Microsoft.SqlServer.SqlEnum.dll"

and any others in that directory that you may need.

The other way is to use the modern modules dbatools or SqlServer. If you don't have these installed and you have PowerShell 5.1 (which is native on Windows 10), you can do the following:
PS:>Install-Module SqlServer -AllowClobber
PS:>Install-Module dbatools

Note: The reason you would specify -AllowClobber is that if SQL Server is installed on that server, it ships the SQLPS module (installed for compatibility with SQL Agent for now), whose command names would otherwise clash.

Or if you already have them installed and want to update them (if you have both installed, otherwise
just specify the one you have installed):
PS:>Update-Module SqlServer, dbatools

These modules have the SMO libraries embedded and will side-load them when the module is imported using Import-Module dbatools (or the same with Import-Module SqlServer).
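As a quick sanity check (a minimal sketch; the exact assembly path on your machine will differ), you can confirm that importing the module made the SMO types available:

PS:> Import-Module dbatools
PS:> [Microsoft.SqlServer.Management.Smo.Server].Assembly.Location   # prints the path the SMO assembly was loaded from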

Growing Data or Log Files


A database is made up of 1 or more Data Files and at least 1 log file and hopefully only 1 log file. There
is a setting on a database to allow your files to auto grow, but we really don’t want that to happen if
we can help it. Here is how it is done in a couple of different ways: the first will be in SMO and the second via dbatools, where there are commands available to accomplish it.

Scenario 1
I have a database named Database1 on SqlServer1 and it has 2 files, 1 is a data file with the internal
name being “Database1_Data” and a log file with the internal name “Database1_log”. I need to grow
the Data File by 1GB and I need to grow the Log file by 512MB.



SMO Solution
First you will need to ensure that you follow the Background on SMO section above. Then you will proceed with creating objects. You will see that using a module like dbatools or SqlServer will be of great help, since they can return SMO objects to you that save you some steps. But for now I will show you the SMO way (each command should be on its own line):
PS:> $server = New-Object -TypeName Microsoft.SqlServer.Management.Smo.Server -ArgumentList SqlServer1
PS:> $db = $server.Databases["Database1"]
PS:> $file = $db.FileGroups["PRIMARY"].Files["Database1_Data"]
PS:> # SMO file sizes are in KB, so PowerShell's 1MB constant (1,048,576) grows the file by 1 GB
PS:> $file.Size += (1MB)
PS:> $file.Alter()   # apply the change on the server

PowerShell Module Solution


We are going to use the dbatools module to accomplish the same thing. One thing to note is that dbatools does not have a command to expand a data file, just a log file, but we can use a combination to expand a data file. (Look closely and you will see it is pretty close to the SMO version.)
PS:> Import-Module dbatools
PS:> $db = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
PS:> $file = $db.FileGroups["PRIMARY"].Files["Database1_Data"]
PS:> $file.Size += (1MB)   # again in KB, so this grows the data file by 1 GB
PS:> $file.Alter()

The log file is a different story, and it is actually a little harder in this case: Expand-DbaDbLogFile expects a TargetLogSize instead of just an increment that you want to add, so you have to go through more steps.
PS:> Import-Module dbatools
PS:> $db = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
PS:> # LogFiles[0].Size is reported in KB; verify the unit -TargetLogSize expects in your dbatools version
PS:> Expand-DbaDbLogFile -SqlInstance SqlServer1 -Database Database1 -TargetLogSize ($db.LogFiles[0].Size+(512KB)) -Confirm:$false

Changing Database Settings


Setting some database properties can be done simply as well. The two properties I will cover are the Recovery Model and the Compatibility Level. There are many more properties, but I will focus on these two. The Recovery Model will change to FULL and the Compatibility Level will change to 140 (SQL Server 2017).

SMO Solution
This will look familiar, as these types of changes in SMO basically stay the same. (Note that I use dbatools' Get-DbaDatabase here as a shortcut that returns the SMO database object.)
PS:> $db = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
PS:> $db.CompatibilityLevel = "Version140"
PS:> $db.Alter()

PowerShell Module Solution


I will use the dbatools module for this solution as well. (The cmdlet here is Set-DbaDbCompatibility; the parameter name may differ across dbatools versions, so check Get-Help Set-DbaDbCompatibility.)
PS:> Import-Module dbatools
PS:> Set-DbaDbCompatibility -SqlInstance SqlServer1 -Database Database1 -Compatibility Version140



SMO Solution
Let us now change the Recovery Model to FULL for the database.
PS:> $db = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
PS:> $db.RecoveryModel = "FULL"
PS:> $db.Alter()

PowerShell Module Solution


This solution will use the dbatools module.
PS:> Set-DbaDbRecoveryModel -SqlInstance SqlServer1 -Database Database1 -RecoveryModel FULL

Summary
Performing your DBA duties can be done in a few ways, even using SSMS, but the point of this article is to show that you can use PowerShell to change database settings, and even to use modules to make that simpler. PowerShell can do much more than you see here, but this should be a good taste of what you
can do. Join me in the quest to become a PowerShell DBA.

Questions? Comments? Talk to the author today. Ben Miller on Twitter.

About Ben Miller


Ben has been a member of the SQL Server Community since 2000. He loves a
challenge and has fixed many SQL Servers and helped hundreds of people get
more out of their DBA jobs.

LEARN MORE

Non-Tech Ben Miller


During his free time he loves to bowl and play golf and has even bowled a perfect 300 game.

Want to write for the magazine? Comments? Feedback? Reach out to us at


[email protected]



#DPS2021
Lucky Draw
This is me again, DaDa, your
companion, guide and the official
mascot for Data Platform Virtual
Summit 2021 - happening in the
month of September 2021.
Seeing how lively the SQL community is, we decided to add an interactive aspect to pre-event activities in order to liven things up a bit - The DPS 2021 Lucky Draw!

This is your only chance to win a full-access Summit pass - which includes all Summit
features such as Breakouts, Data + AI Gurukul, Technical Round Tables, AMA Sessions,
Panel Discussions, Community Zone & much more!
The Lucky Draw works on a SQL algorithm, which is responsible for selecting the winners - every week! Yes, each week we are announcing multiple winners. While registering for the Lucky Draw, you need to provide your information just once. On the off chance that you are not selected as a winner in a specific week, your participation will be automatically forwarded to the draws taking place in the following week(s). It is important to note that the Lucky Draw and the resulting free pass are exclusively for the LIVE attendees of the Summit. In case you have already booked the Summit ticket and are also chosen as a winner, a full refund will be provided by the DPS Team. So worry not if your luck is running low at the moment; we made sure that the odds are still stacked in your favour! On that note, consider giving Lady Luck a chance and submit your participation at the earliest by visiting -
DPS 2021 LUCKY DRAW

All the Best!

Participate Now Training Classes DPS Home



{Place your company ad here and reach out to our readers}
{Talk to us today. Drop an email at [email protected]}

Got from a friend? Subscribe now to get your copy



Take Action for Diversity,
Equity, & Inclusion
DeNisha Malone | @thepowerbiqueen

Every day I live to make the world better. My community is important, and the people I identify
with depend on me to lead with them in mind. Being a black American female and the first
member to graduate college in my immediate family, I have awaited this time when diversity,
equity, and inclusion are being heard and talked about more often in workplaces influenced by white
America. This article breaks down the meaning of each word and provides action steps that you, as an
individual, can do to change your professional environment and support the movement for a better
society.

Diversity:
Your organization is diverse when it includes people from different social/ethnic backgrounds, sexual
orientations, genders, and people with disabilities. A diverse working environment will have a greater
variety of thoughts and a superior blend of foundations and experience. This prompts better thoughts
and procedures for making progress in business and projects. Being with individuals of different
backgrounds with various life encounters can produce thoughts or points of view that others might
not have considered or thought about. Everybody has their own way of approaching an issue - thoughts are shaped by an individual's experiences and the perspective they hold. Rather than everybody
contributing similar considerations and solutions, a diverse interpretation and approach can lead to
creativity and innovation.

Here are actions to activate diversity:


1. Connecting with diverse industry professionals when you have or know of open
positions.
2. Interviewing candidates with non-traditional credentials.
3. Creating an employee referral program to encourage a more diverse talent pool.
4. Having employee resource groups consisting of employees who share mutual traits,
backgrounds, or interests.

Equity:
People who do not face challenges because of their identity have a head start on the ones who do.
This should not imply that the person who is behind cannot compensate for any shortcomings - in
most cases, people overcome even though the chances may not be favourable based on societal
norms.

Equity is ensuring that the challenges are not stacked, to begin with - it is to make changes so
everybody has a similar chance to succeed. To have equity in your organization, there must be an
understanding of what everyone needs and wants to be successful in that work environment. When
you give everyone the same thing, it’s known as equality; and equality could lead to boredom and job
switching. Try elevating equity and give everyone what they truly need to be happy and productive at
work.

Here are actions to elevate equity:


1. Working with more minority and women-owned business vendors and consultants.
2. Mentoring and providing professional development opportunities.
3. Modifying screening questions to focus more on potential and ability and less on
specific criteria such as education and previous positions.
4. Considering equity-driven analysis when working with data; this could mean removing
bias and stereotypes like race and gender columns.

Inclusion:
Inclusion births acceptance. We are at our best when we are accepted as ourselves. To be your
genuine self, a person must feel included. Inclusion recognizes how much people can use their voice,
make decisions in a group, and move up in leadership roles in an organization. When a person is
included, they have a feeling of belonging that drives results as they begin to collaborate in teams and
provide new ideas.

Here are actions to lead inclusively:


1. Including others’ perspectives to promote full participation and the sense of belonging
of everyone.
2. Supporting inclusive leadership across the organization by motivating diverse people
and women to move into leadership positions.
3. Embracing employees to be their full authentic selves.
4. Expanding the company’s calendar to recognize diverse holidays, for example,
Ramadan for Muslims; Diwali for Hindus, Jains, Sikhs, and Newar Buddhists; and
Juneteenth for African Americans on June 19th.

Questions? Comments? Talk to the author today. DeNisha Malone on Twitter.

About DeNisha Malone


DeNisha Malone is a business intelligence solutions architect and international
speaker for the data platform industry.

LEARN MORE

Non-Tech World of DeNisha Malone

When DeNisha is not working, she enjoys mentoring college students and traveling the world to
experience new things.



Did you know?

The code name for the In-Memory OLTP feature of SQL Server was Hekaton. Project Hekaton was conceptualized in collaboration with Microsoft Research. Learn More

Azure SQL Data Warehouse is now Azure Synapse Analytics. Learn More

One of the longest SQL blog series was written by Manohar Punna. The series was called "One DMV a Day": for 80 days non-stop, Manohar covered 80 DMVs with examples & code snippets. Learn More

In the years 2015 & 2016, Data Platform Summit was known as SQLServerGeeks Annual Summit (SSGAS). In 2017, SSGAS was renamed DPS (Data Platform Summit). Learn More


A Tip to Optimise
LIKE Searches
Erland Sommarskog

Consider this query:
SELECT * FROM tbl WHERE col LIKE '%alter%'

The table has some size and the text in the column is long and the query is slow. What
options are there to speed it up?

The best option in many situations is to use full-text search, which is a feature that ships with SQL
Server (although, at Setup time it is optional to install full-text support). With a full-text index in place
the query can be written as:
SELECT * FROM tbl WHERE CONTAINS(col, 'alter')

And the response time will be really good.
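For completeness, here is a minimal sketch of what putting that full-text index in place might look like (the catalog name ftcat and the unique key index name PK_tbl are assumptions, not from the article):

CREATE FULLTEXT CATALOG ftcat;
CREATE FULLTEXT INDEX ON tbl (col) KEY INDEX PK_tbl ON ftcat;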

Unfortunately, full-text is not an option if users want to find alter also when it is in the middle of a word such as inalterable. This is because full-text builds an index on words. Likewise, full-text is not an option if users want to include punctuation characters in their search strings, because full-text strips those out.

There is a second option to build a fast index-based solution, by using something known as n-grams, but there is no built-in support for this in SQL Server and it is very heavy artillery. Thus, in most cases you will have to make do with LIKE and wildcard patterns. You may already have understood that one reason LIKE searches with leading wildcards are slow is that no index can be used, so they result in a scan.

However, there is a second reason why these searches are slow, and I would like to share a tip on how you can easily factor out that part to speed things up a little bit.

To understand this second reason, we start by creating this somewhat nonsensical table:

CREATE TABLE guids (
    ident            bigint        NOT NULL IDENTITY,
    Windows_nvarchar nvarchar(200) COLLATE Latin1_General_100_CI_AS NOT NULL,
    SQL_nvarchar     nvarchar(200) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    Windows_varchar  varchar(200)  COLLATE Latin1_General_100_CI_AS NOT NULL,
    SQL_varchar      varchar(200)  COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
)

You may note that there is one pair of nvarchar columns and one pair of varchar columns, likewise
there is one pair of columns with a Windows collation and one pair of columns with an SQL collation.

Next, we fill it up with some data:

INSERT guids (Windows_nvarchar, SQL_nvarchar, Windows_varchar, SQL_varchar)
SELECT TOP (1000000)
       concat_ws(' ', newid(), newid(), newid(), newid()),
       concat_ws(' ', newid(), newid(), newid(), newid()),
       concat_ws(' ', newid(), newid(), newid(), newid()),
       concat_ws(' ', newid(), newid(), newid(), newid())
FROM sys.columns a
CROSS JOIN sys.columns b

This results in a lot of strings that are 147 characters long (four 36-character GUIDs plus three separating spaces).

Let's now run a LIKE search on the Windows_nvarchar column:

DECLARE @d datetime2(3) = sysdatetime(),
        @searchstr varchar(50) = '%abc%',
        @cnt int
SELECT @cnt = COUNT(*) FROM guids WHERE Windows_nvarchar LIKE @searchstr
PRINT concat('Windows_nvarchar: ', datediff(ms, @d, sysdatetime()), ' ms. ',
             'Count: ', @cnt)

The output I got was:


Windows_nvarchar: 4422 ms. Count: 23070

When I run the same search for the columns SQL_nvarchar and Windows_varchar, the result is
similar:
SQL_nvarchar: 3752 ms. Count: 23239

Windows_varchar: 4328 ms. Count: 23073



But when I try the last column, the result is strikingly different:
SQL_varchar: 1062 ms. Count: 22921

Compared to the original search, this is a speed-up by a factor of four.

Why does this happen? With a LIKE search with a leading wildcard, SQL Server does not only have to scan the index; it also has to scan the strings. For a column which does not include the search string, SQL Server has to scan the string from the start almost to the end to see if there is a character that is equal to 'a' according to the current collation. Depending on the type of collation, that could be an A, á or Å. Then again, if the collation is accent-sensitive, an a followed by a combining acute accent is not a match, because logically that is the same as á. You may not have heard of combining accents before, but they are a feature of Unicode; applying the full rules of Unicode is quite complex, and when SQL Server has to do this for every character, it starts to add up.

This obviously happens if you have nvarchar, since that is Unicode. But it also happens if you have a
Windows collation and varchar, because with a Windows collation all operations are performed in
Unicode also for varchar.

With an SQL collation and varchar, this is different. The definition of varchar in SQL collations goes
back to SQL 6.5 and earlier when SQL Server did not support Unicode. For this reason, SQL collations
have a completely different library with its own set of rules. And particularly, this library only has to
bother about the 255 characters in the code page that the SQL collation supports for varchar and
therefore character comparison is a much simpler and faster business.

So is the lesson that we should use SQL collations with varchar for LIKE searches? Not really. If you
design a new application, there is no reason not to design it for full international support to be future-
proof. This means that you should use nvarchar, or, if you are on SQL 2019 or Azure, use varchar with
a UTF8 collation. But varchar with an SQL collation is a poor choice.
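As a minimal sketch of that last suggestion (the table and column names here are made up), a UTF-8 collation is just another collation on a varchar column:

CREATE TABLE docs (
    col varchar(200) COLLATE Latin1_General_100_CI_AS_SC_UTF8 NOT NULL
)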

However, there is a second way to avoid the complex Unicode rules: cast the search column to a binary
collation:

DECLARE @d datetime2(3) = sysdatetime(),
        @searchstr varchar(50) = '%abc%',
        @cnt int
SELECT @cnt = COUNT(*) FROM guids
WHERE Windows_nvarchar COLLATE Latin1_General_100_BIN2 LIKE @searchstr
PRINT concat('Binary collation: ', datediff(ms, @d, sysdatetime()), ' ms. ',
             'Count: ', @cnt)

This is even faster than the SQL collation:

Binary collation: 859 ms. Count: 0

With a binary collation, all that matters is the code point. But this also has the side effect that the comparison is case-, accent- and everything-else-sensitive. You can see that the count is 0, and this is because the search string is lowercase while the column data is all uppercase.



In most applications, users want a case-insensitive search, so this does not seem useful. However, this
is a problem we can overcome with a small modification: we can apply the upper function to both the
column and the search string:
DECLARE @d datetime2(3) = sysdatetime(),
        @searchstr varchar(50) = '%abc%',
        @cnt int
SELECT @cnt = COUNT(*) FROM guids
WHERE upper(Windows_nvarchar) COLLATE Latin1_General_100_BIN2
      LIKE upper(@searchstr)
PRINT concat('Binary collation with upper: ',
             datediff(ms, @d, sysdatetime()), ' ms. ',
             'Count: ', @cnt)

With this change, I got this output:


Binary collation with upper: 890 ms. Count: 23070

It is almost five times faster than the original search! And notice that this works no matter whether the column is varchar or nvarchar. It is still not going to be blazingly fast, because there is still an index or table scan. But at least we have reduced the cost of the string scan - and with very simple measures.

I should add the disclaimer that this is not 100% faithful to the result you will get with a case-insensitive collation; there are situations where you will get different results. However, these are very much corner cases, and I would suggest that it will be good enough. On the other hand, if the users also require the search to be accent-insensitive, so that resume matches résumé, this trick is not an option for you.
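If you use this trick in many queries, one way to avoid repeating the upper()/COLLATE incantation (a sketch of mine using the test table above, not something benchmarked in this article) is to persist the projection in a computed column and search that instead:

ALTER TABLE guids ADD Windows_nvarchar_upper_bin AS
    upper(Windows_nvarchar) COLLATE Latin1_General_100_BIN2 PERSISTED

SELECT COUNT(*) FROM guids
WHERE Windows_nvarchar_upper_bin LIKE upper('%abc%')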

Closing note: above I mentioned n-grams as a solution to this problem. This is actually something I
have written about, and it is published as a chapter in the book SQL Server MVP Deep Dives, where a
number of us MVPs wrote one or two chapters about our favourite topics. We do not make any money
from this book, as our royalties go to War Child International, a network of independent organisations,
working across the world to help children affected by war. If you want a really fast implementation for
this type of searches, you may want to check out that book. But as I said, it is heavy artillery. By the
way, the same book also includes an excellent introduction to full-text indexes, written by Robert C.
Cain.



Questions? Comments? Talk to the author today. Erland Sommarskog.

About Erland Sommarskog


Erland Sommarskog is an independent consultant based in Stockholm, working
with SQL Server since 1991. He was first awarded SQL Server MVP in 2001, and
has been re-awarded every year since.

LEARN MORE

Non-Tech World of Erland Sommarskog

Erland plays bridge (when there is not a pandemic), enjoys travelling (when there is not a pandemic),
and in the far too short Swedish summer he goes hiking and biking in the forests and around the lakes
around Stockholm.

Want to write for the magazine? Comments? Feedback? Reach out to us at


[email protected]



SQL Nuggets
by Microsoft

Microsoft.Data.SqlClient 2.1.3

Microsoft.Data.SqlClient 3.0 Preview 3

Cumulative Update #24 for SQL Server 2017 RTM

Microsoft Build Announcements

Announcing Azure SQL Database ledger

Always Encrypted for Azure Cosmos DB (in preview)

Azure Cosmos DB serverless is now generally available for all APIs

Low-cost and free options for Azure Cosmos DB, Azure Database for PostgreSQL, and Azure Database for MySQL



{Place your company ad here and reach out to our readers}
{Talk to us today. Drop an email at [email protected]}

Got from a friend? Subscribe now to get your copy.



Data Discovery
and Classification
Prince Rastogi | @princerastogi2

Data security is one of the most important areas of any business nowadays. Data size is growing rapidly in almost every type of business, which makes it even more important to protect that data. There are various features in SQL Server which help us apply security layers to protect data, for example: Transparent Data Encryption, backup encryption, Always Encrypted, Row-Level Security, etc. Before implementing these features, it is also important to identify what type of data is stored in our databases; then we can decide what security layer is required to protect that data. This is the area where Data Discovery and Classification comes into the picture.

Data Discovery and Classification is the process of scanning your data and classifying it by applying proper labels based on a predefined set of patterns, keywords, or rules.

As a Database Administrator, Database Security Professional, or Architect, it is also important to keep good documentation of what type of data is stored in your databases and how critical it is. Once you have all this information, it is easy to decide what type of security control needs to be placed around that data to achieve compliance. The Data Discovery and Classification feature in SQL Server Management Studio can help us achieve this. Microsoft initially launched Data Discovery and Classification in SQL Server Management Studio 17.5, and it can be used for databases running on SQL Server 2012 and later.

To use this feature, right-click on your database in SSMS -> Tasks -> Data Discovery and Classification.



Classify Data:
This option will show you all classified columns in the database (if any). It will also show you some classification recommendations based on patterns or rules defined to scan columns based on their names. Below is the screenshot for Classify Data on the AdventureWorks database:

Here we don't have any columns classified yet, but we do have some recommendations. We can get details of all the recommendations by clicking on the top or bottom message.

Predefined default Information Types are shown below. Using these Information Types, you can categorize your data easily. If you want to change the Information Type for any column in the recommendation list, you can change it from the drop-down.



Once you have a proper category for a column, you can place a proper sensitivity label on it. Predefined sensitivity labels are shown below. You can change this label using the drop-down if it was not properly identified in the recommendation.

Once you are done accepting or modifying recommendations, click on the "Accept selected recommendations" button.

After accepting recommendations, the Save button will be enabled; click on "Save" to save all these recommendations.

If there is some column which is not in the recommendation list, it can be added manually by clicking on the "Add Classification" option next to "Save".



Generate Report:
This is the second option under Data Discovery and Classification. Using this option, a report can be generated which provides information such as: how many columns have been classified out of the total number of columns, how many tables contain sensitive data out of the total number of tables, etc.



Export Information Protection Policy:
Using this option, you can export the default information protection policy JSON file where all the rules are defined. You can modify any existing labels or add new labels based on your organization's standards.

In the same way, Information Types can also be modified or added based on organization standards. Under each Information Type, patterns are defined to provide recommendations. You can modify them or add new patterns. This is quite easy to manage.



Once all changes are done to this file, you can upload it back using "Set Information Protection Policy File..." to overwrite the default information protection file. If you want to go back to the default rules, use "Reset Information Protection Policy to default..." under Data Discovery and Classification.

If you are using SQL Server 2019, you can use the sys.sensitivity_classifications system catalog view to get classified column details. You can also add new classified column(s) using the ADD SENSITIVITY CLASSIFICATION syntax.
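As a brief, hedged sketch of that T-SQL route (the table and column, dbo.Customers.Email, are made-up examples):

ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- review what has been classified
SELECT * FROM sys.sensitivity_classifications;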

Conclusion:
Data Discovery & Classification in SQL Server Management Studio and SQL Server can play a very important role in categorizing all the data inside your databases. This helps database security and policy compliance in various ways.

Questions? Comments? Talk to the author today. Prince Rastogi on Twitter.

About Prince Rastogi:


Prince Rastogi is working as Cloud Database Architect with Elephant Insurance
Services LLC in Richmond area.

LEARN MORE

Non-Tech World of Prince

During his free time, Prince loves cooking & watching historical movies.

Want to write for the magazine? Comments? Feedback? Reach out to us at


[email protected]




Attention all Data Enthusiasts! My name is DaDa. I represent Data Platform Virtual Summit this year. And the big news is here: DPS 2021 (#DPS2021) has been announced.

Well, not just the DPS 2021 announcement; the bigger news is that DPS 2021 (the Summit) is now free if you book any one Training Class. Read on to know more.

I will take you on an exhilarating journey, into


the world of Data, Analytics and AI. Be a part
of the action this September, to learn & grow
your technical skills with some deep technical
content from the world’s best Data
Professionals.

Data Platform Virtual Summit will run from Sep 13 to 18. Pre-Cons on Sep 8 & Sep 9.
Post-Cons on Sep 20 & Sep 21.

Visit DPS 2021 Today CFS is Open



Building on the success from last year, DPS 2021 will be virtual and will run for 54 hours – covering the
entire globe. Brilliant minds, spanning over different continents, will defy geographical boundaries and
come together to pull-off this spectacle.

A 100% technical learning event with 150+ Breakout Sessions, 20+ Training Classes, 100+ of the World's Best Educators & 54 hours of conference sessions makes DPS 2021 one of the largest online learning events on Microsoft Azure Data, Analytics & Artificial Intelligence.

Virtual World of DPS

Last year, Data Platform Summit transitioned into a virtual event. We brought you 30+ Training Classes, 200+ Breakout Sessions, 170+ of the World's Best Educators, 48 hours of Pre-Cons, 48 hours of Post-Cons & 72 hours of non-stop conference sessions - DPS 2020 was the largest online learning event on Microsoft Azure Data, Analytics & Artificial Intelligence.

For your comfort, we will cover all time zones, running continuously. The event is coming to your country, your city, your home! And now, being virtual, DPS has a bigger participation from Microsoft Redmond-based product teams & worldwide MVPs.

Now comes the exciting part: our virtual conferencing platform. You will be delighted to experience the incredible interactivity of the platform - truly immersive!

[Platform screenshots: Breakout Session Room, Round Tables, Data Gurukul, Exhibitor Hall]
Investigate CPU Usage
In SQL Server
Satya Ramesh | @satyaramesh230

In this article, we will see a few scripts that help us find CPU usage using DMVs & DMFs. These will be quite handy in day-to-day DBA query tuning efforts.

SQL Server Dynamic Management Views (DMVs) & Dynamic Management Functions (DMFs) are used to investigate the health of the SQL Server engine. The metrics & data items produced by them are very useful in analyzing and fixing performance problems.
Query Level CPU Usage


Often, we want to find out which queries are consuming a lot of CPU, IO & memory. You can do that using the sys.dm_exec_query_stats DMV. The below script uses that DMV along with the sys.dm_exec_sql_text & sys.dm_exec_query_plan DMFs to get the actual culprit queries and their respective execution plans.

SELECT TOP 10
     est.[text] AS SQLStatement
    ,eqp.query_plan
    ,[execution_count]
    ,[total_worker_time]/1000 AS [TotalCPUTime_ms]
    ,[total_elapsed_time]/1000 AS [TotalDuration_ms]
    ,query_hash
    ,plan_handle
    ,[sql_handle]
FROM sys.dm_exec_query_stats eqs
CROSS APPLY sys.dm_exec_query_plan(eqs.plan_handle) eqp
CROSS APPLY sys.dm_exec_sql_text(eqs.sql_handle) AS est
ORDER BY [TotalCPUTime_ms] DESC

In the above query, if we change the column in the ORDER BY clause to total_logical_reads column
then we can find out TOP 10 I/O intensive queries (from a read perspective). If we do a sort on the
total_grant_kb column, then we can find out top queries that are asking for extra memory grant. If
we do a sort on the [TotalDuration_ms] column, then we can extract long-running queries.
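For instance, a minimal sketch of the read-intensive variant (same pattern as above, different sort key):

SELECT TOP 10
     est.[text] AS SQLStatement
    ,total_logical_reads
    ,execution_count
FROM sys.dm_exec_query_stats eqs
CROSS APPLY sys.dm_exec_sql_text(eqs.sql_handle) AS est
ORDER BY total_logical_reads DESC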

Request Level CPU Usage


We can also extract CPU cycles information of each request that is executing in SQL Server by
using sys.dm_exec_requests DMV. In this DMV, we need to look at the cpu_time column, which tells
us the CPU consumption of each request sent to the engine. We can do a CROSS APPLY
with sys.dm_exec_query_plan & sys.dm_exec_sql_text DMFs to get the query plan and the query



text. We can also exclude all the background tasks. The DMV gives a whole lot of other information like wait types, wait time, etc. Below is the query.
SELECT
session_id,
wait_type,
wait_time,
cpu_time,
eqp.query_plan,
est.[text]
FROM sys.dm_exec_requests er
CROSS APPLY sys.dm_exec_query_plan(er.plan_handle) eqp
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS est
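-- session_id > 54 below filters out system sessions; adjust the threshold for your environment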
WHERE session_id > 54
AND [status] NOT LIKE 'background'
ORDER BY cpu_time DESC

Procedure/Function/Trigger Level CPU Usage


Similar to the query stats DMV (which gives us query level info), we can also find out CPU
consumption of each stored procedure, trigger and user-defined functions by using
sys.dm_exec_procedure_stats, sys.dm_exec_trigger_stats and sys.dm_exec_function_stats DMVs,
respectively. All these DMVs have common columns that give us CPU information: total_worker_time, last_worker_time, min_worker_time & max_worker_time.

We can CROSS APPLY with sys.dm_exec_query_plan & sys.dm_exec_sql_text DMFs to extract the
query text and the query plan.

Here is one example using procedure stats.


SELECT
total_worker_time,
min_worker_time,
max_worker_time,
last_worker_time,
eqp.query_plan,
est.[text]
FROM sys.dm_exec_procedure_stats ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) eqp
CROSS APPLY sys.dm_exec_sql_text(ps.sql_handle) AS est

This was just a level 100 introduction to extract workloads causing excessive CPU usage. What are
your techniques? Post/participate in the discussion: LinkedIn, Twitter, FB.



Questions? Comments? Talk to the author today. Satya Ramesh on Twitter.

About Satya Ramesh

Satya Ramesh has credible experience working with SQL Server. As a Senior
Consultant for SQLMaestros, he has worked on countless projects involving SQL
Server & Azure SQL Database.

LEARN MORE

Non-Tech World of Satya Ramesh


Satya likes to play cricket with his friends and loves to cook new dishes in his free time.

Want to write for the magazine? Comments? Feedback? Reach out to us at


[email protected]





Learning
Opportunities

SQL Saturday Los Angeles


12 June 2021
data.SQL.Saturday.la is a free training event
for professionals who use the Microsoft data
platform

SQL Trail
15 June 2021
SQL Trail isn’t your average tech conference.
Built to mimic the “hallway track,” SQL Trail
is designed around a conversational model.
DATA SATURDAY - DATA ANZ
25 June 2021
Data Saturdays is a place for the data community to run small regional events with little outlay.

Data Platform Virtual Summit


13-18 Sep 2021
Accelerating Data Driven Success

SQL Down Under Podcast


SQL Down Under is a podcast (audio show) for
SQL Server professionals

Video Channels

SQLServerGeeks



Want to list your event here? Just tag us in your tweets @SQLServerGeeks
Frequently Asked
Questions for SQL Server
on Linux/Containers
Amit Khandelwal | @amvin87

You've been a SQL Server DBA for many years, and now you have a project to deploy SQL Server containers as part of data estate modernization. Or, as part of the company vision to diversify your environment, you need to deploy SQL Server on the Linux ecosystem, which could be on a virtual machine or bare metal, running on-prem or in a cloud like Azure. If you are this DBA now, or could be in a few months' time, then this article is for you to get started on this wonderful journey.

In this article, I've attempted to answer some of the common questions that users have when they first start working with SQL Server on Linux/containers. We do have a document in place for SQL Server on Linux FAQs; nonetheless, this article may contain some queries that are similar to those already answered in the FAQ area. The idea, though, is to also provide you with the SQL Server containers context for those questions.

Here, I attempt to answer some of the common questions that I've been asked at various conferences, customer meetings, and by our support team when working with SQL Server on Linux/containers. Wherever feasible, I've tried to respond to these queries using reference links to the documentation, as these are continuously updated and maintained by us, ensuring that whenever you refer to this article you get the updated information. Having stated that, it's now time to let the questions roll:

• Is the SQL Server engine same across SQL Server deployed on any operating system or
environment?
Amit: The SQL Server engine is the same across SQL Server deployed on different operating systems, be it SQL Server on Linux, SQL Server on Windows, or SQL Server on Linux-based containers deployed on standalone hosts or Kubernetes platforms. The only difference is that a few features are currently not supported on SQL Server on Linux/containers; we are working to bring parity for those features soon. The current list of unsupported features for SQL Server on Linux or containers is available at Unsupported features & services for your reference.

• Is SQL Server on windows-based containers supported for production?


Amit: As of the time of writing this article, SQL Server running on Windows containers is not supported for production workloads. It was in preview mode for some time, but due to the current ecosystem limitations it is for now unsupported, and out of the beta program as well.

• How do I access SQL Server configuration manager for SQL Server on Linux, or in simple words
how do I configure SQL Server on Linux?
Amit: You can configure SQL Server on Linux using the mssql-conf tool. All the tasks such as changing the SQL Server port, enabling AD authentication, enabling/disabling trace flags and more can be done through the mssql-conf tool. You can run the command "mssql-conf --help" to understand the various settings available.
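For example, a hedged sketch of changing the TCP port (the port number is just an example; on a default install the tool lives under /opt/mssql/bin):

sudo /opt/mssql/bin/mssql-conf set network.tcpport 1435
sudo systemctl restart mssql-server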

• Are SQL Server on Linux containers deployed on supported Linux distributions supported for
production workload? What are the support boundaries?
Amit: Yes, SQL Server on Linux containers is supported for production workloads. You can deploy SQL Server on Linux-based containers on any of the supported Linux distributions. You can obtain the SQL Server images from the Microsoft Container Registry; to discover the Ubuntu-based SQL container images you can refer to Docker Hub, and to discover the Red Hat-based containers you can refer to the Red Hat container catalog. SQL Server images can also be created using a custom Dockerfile, but ensure that you follow the support guidelines available here when you create it.

• How do I create SQL Server Linux container image for SUSE based host? Is there a sample
dockerfile that I can refer to?
Amit: You can create a SQL Server Linux container image for a SUSE-based host through a Dockerfile; a sample is available here for reference. This is also supported for production.

• Can I deploy SQL Server Linux containers using podman as well, like I do with docker?
Amit: Yes, both podman and docker are supported and can be used to deploy and run SQL Server containers.

• If I am new to Linux ecosystem but well versed with SQL Server, is there any tutorial that I can
refer to learn about SQL Server on Linux?
Amit: If you are planning to start working with SQL Server on Linux with previous experience of SQL Server on Windows, you will find that SQL Server on Linux is not all that different; only the setup and ecosystem experience changes. To get started, you can refer to this tutorial to help you with the basics of Linux, and then you can build further on those basics based on your interest. Similarly, if you are well versed with Linux but new to SQL Server, you can start with this tutorial.

• Can I attach/move/restore databases across SQL Server instances running on various


Operating systems or on containers?

Amit: Absolutely; the SQL Server engine is the same across all deployments of SQL Server. So you can take a backup of a database from a SQL Server Linux-based container, move it to the Windows environment and then restore it on SQL Server on Windows on bare metal or a VM. In fact, we have documentation available which talks about database migration to SQL Server on Linux.

• Is there a guided recommendation on how to deploy SQL Server on Linux for best
performance?
Amit: Yes, we have detailed documentation available here which talks about various performance configurations that can be set at the storage, filesystem, kernel, CPU and other levels for optimal SQL Server performance. Most of the recommendations mentioned in this document can also be applied to the host machine running SQL Server containers; for example, the storage settings can be applied on the host if the persistent storage is presented from the host, the tuned profile setting can be applied to the host machine, etc.

• Can I go ahead, and limit resources assigned to SQL Server containers?


Amit: Yes, when deploying SQL Server containers, you can set resource limits like CPU and memory for each SQL Server container that is deployed. The docker flags '--cpus' and '--memory' can be used to set the resource limits.
Let's take an example to understand this further. Say you have a total of 16 logical processors on the host, and you deploy the container using the following command:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=strongpassword" -p 1433:1433 --name sql1 --hostname sql1 --cpus 8 -d mcr.microsoft.com/mssql/server:2019-CU10-ubuntu-18.04

This command ensures that the SQL Server container uses only 8 of the 16 logical processors. In the error log you will see that SQL Server can see all 16 logical processors, but it only utilizes 8 of them to run the workload.
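A memory cap works the same way; as a hedged variation of the command above (the 8g value is just an example):

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=strongpassword" -p 1433:1433 --name sql1 --hostname sql1 --cpus 8 --memory 8g -d mcr.microsoft.com/mssql/server:2019-CU10-ubuntu-18.04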

• Can I deploy SQL Server on Kubernetes cluster and is this supported?


Amit: Yes, you can deploy SQL Server on a Kubernetes cluster, or an opinionated Kubernetes distribution like OpenShift, and it is supported for production workloads. You can deploy SQL Server as a StatefulSet or a Deployment kind. A sample deployment yaml file is available here for reference and can be used on any Kubernetes cluster.

• How do I configure SQL Server on containers, can I use the mssql-conf tool?
Amit: For SQL Server containers, you can provide the configuration settings by mounting the mssql.conf file inside the container when you deploy it. Some of these configuration settings are also available as environment variables that can be set when deploying the SQL Server containers. You can see the samples in the above-referenced article.

• Do you have any sample Helm Charts to deploy SQL Server Containers?
Amit: Yes, you can deploy SQL Server via Helm charts, and here are some samples to get you started: please refer to the statefulset helm chart for the StatefulSet deployment, and this one for the normal Deployment.

• Can we also setup the resource limits when deploying SQL Server Containers on Kubernetes
platform?
Amit: Absolutely, you can always use the resource limits option in the deployment yaml file when deploying SQL Server containers. In fact, you should always try to ensure that the QoS (Quality of Service) for the SQL Server container/pod is set to Guaranteed. This means that the SQL Server resource requests and resource limits are the same, which ensures the SQL Server container gets the resources when it is scheduled to run on a worker node by the Kubernetes cluster.
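As a hedged sketch of that last point (the sizes are assumptions), the resources fragment of the SQL Server container spec in the deployment yaml would set requests equal to limits:

resources:
  requests:
    cpu: "4"
    memory: 8Gi
  limits:
    cpu: "4"      # requests == limits gives the pod the Guaranteed QoS class
    memory: 8Gi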
• Can I configure AD (Active Directory) authentication for SQL Server on Linux/containers?
Amit: Active Directory authentication is supported for both SQL Server on Linux and SQL Server Linux-based containers. In fact, you can now use a preview tool called adutil to easily configure Active Directory authentication for both. This tool eases the configuration by letting you manage the Windows Active Directory from a Linux machine which is joined to the domain.

When enabling AD authentication for SQL Server on Linux containers, you can have an environment where the host machine running the SQL Server container is not joined to the domain, as long as the SQL Server inside the container is joined to the domain. But for SQL Server on Linux bare metal/VM, you need to ensure that both the host and the SQL Server service are part of the same domain. Cross-domain logins are also supported, provided both domains have a two-way trust and are part of the same forest.

• I want to setup Always on Availability group between multiple SQL Server containers running
on the same kubernetes cluster, how do I do it?
Amit: As of the time of writing this article, SQL Server availability group setup on containers is only supported in read-scale mode and not in any other mode. Hence, you have a DR option using Always On availability groups for SQL Server on containers, but not HA (High Availability). You can follow this blog to set up a read-scale Always On availability group on SQL Server containers running in Kubernetes.

• If I have more questions, where can I post them or can I write them directly to you?
Amit: Yes! Please send your questions on SQL Server on Linux or containers to [email protected], or follow me on linkedin.com/in/amvin87/ or twitter.com/amvin87. I'd be more than happy to assist.

Questions? Comments? Talk to the author today. Amit Khandelwal on Twitter.

About Amit Khandelwal

Amit is currently working as a Senior Program Manager with the SQL Server team, focusing on SQL Server on Linux and containers.

LEARN MORE

Non-Tech World of Amit Khandelwal


During his free time, Amit loves to spend time reading books for his daughter and playing board
games with her.

Want to write for the magazine? Comments? Feedback? Reach out to us at


[email protected]

#SQLFamily
Beyond SQL

Daily coping tips by Steve Jones

Is Kevin Chant looking for a new campervan for the next DataWeekender?

Folding@Home - a distributed computing project for simulating protein dynamics

Glenn Berry

Want to be featured here? Just tag @SQLServerGeeks in your tweet.




SQL Server and
JSON
Martin Catherall | @MartyCatherall

It was a huge honour to be asked to write a piece for DataPlatformGeeks - and I pondered a subject for a while.
Over the years SQL Server has introduced several pieces of functionality that have really benefited
users – with the vast majority having good uptake. One of the things that I love about Microsoft
products is that when they see their users actively using a feature, they give that feature a bit of extra
care and attention based on feedback from those users.

One of the recent features that I've found myself using more times than I initially suspected is JSON - or JavaScript Object Notation, to give it its full name.

The ability to manipulate JSON inside SQL Server first appeared in the 2016 release. As you are probably already aware, JSON is an open standard file format for data interchange.

My initial thoughts on this was that it might be similar to the XML integration that SQL Server has had
since 2005 – although that’s pretty robust, it can take a lot of learning.

That hasn't been my experience with JSON. While there was obviously some learning involved, I thought that it was quite intuitive and - if you have experience with any other JSON parser - very easy to get started.

Let’s dive in and have a look.

Firstly – and perhaps initially surprisingly, there is no JSON data type in SQL Server. It’s just a plain old
text string.

However, SQL Server does give us a method of checking that a string is valid JSON:
SELECT ISJSON(N'
{
"configuration_id": 101,
"name": "recovery interval (min)",
"value": 0,
"minimum": 0,
"maximum": 32767,
"value_in_use": 0,
"description": "Maximum recovery interval in minutes",
"is_dynamic": true,
"is_advanced": true
}



') AS [Is Valid Json];
GO
SELECT ISJSON(N'I love data platform geeks') AS [Is Valid Json];
GO

The first SELECT returns 1 (valid JSON) and the second returns 0. This means that if we decide to store JSON in a column, we can introduce a check constraint to ensure that the text we are inserting is in fact valid JSON.

So, let’s create a database to play with, along with a table that has a column to store valid JSON
USE tempdb;
GO

IF EXISTS (SELECT 1 FROM sys.databases AS dbs WHERE dbs.[name] = 'JSONForSQLFolks')
BEGIN
ALTER DATABASE JSONForSQLFolks SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DROP DATABASE JSONForSQLFolks;
END
GO
CREATE DATABASE JSONForSQLFolks;
GO
USE JSONForSQLFolks;
GO
CREATE TABLE dbo.JdataWithCheck
(
SomeJSONData NVARCHAR(MAX) NOT NULL
CONSTRAINT CheckJSON CHECK (ISJSON(SomeJSONData) = 1)
);
GO
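
As a quick sanity check – my own addition, not from the original walkthrough – we can try inserting into this table. The first statement below succeeds; the second fails with a CHECK constraint violation because the string isn't valid JSON:
-- Valid JSON: this insert succeeds.
INSERT INTO dbo.JdataWithCheck (SomeJSONData)
VALUES (N'{"greeting": "I love data platform geeks"}');
GO
-- Not JSON: this insert fails, rejected by the CheckJSON constraint.
INSERT INTO dbo.JdataWithCheck (SomeJSONData)
VALUES (N'I love data platform geeks');
GO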

Also, if we want to retrieve valid JSON from the database, then we have some syntax that we can use directly for that:

FOR JSON AUTO

FOR JSON PATH

Let’s have a look at thoses

FOR JSON AUTO is pretty straightforward.

In this example, we'll simply pull back some data from the sys.configurations table:
SELECT
[Configuration_Property.configuration_id] = C.configuration_id
,[Configuration_Property.Configuration name] = C.[name]
,[Configuration_Property.Value] = C.[value]
,[Configuration_Property.minimum] = C.minimum
,[Configuration_Property.maximum] = C.maximum
,[Configuration_Property.value_in_use] = C.value_in_use
,[Configuration_Property.description] = C.[description]
,[Configuration_Property.is_dynamic] = C.is_dynamic
,[Configuration_Property.is_advanced] = C.is_advanced
FROM
sys.configurations AS C
ORDER BY
C.configuration_id
FOR JSON AUTO;
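
For what it's worth, on my reading AUTO mode treats the dotted aliases as literal key names rather than paths, so the output is a flat array of objects, roughly of this shape (abridged; SQL Server returns it as one unformatted string):
[
  {
    "Configuration_Property.configuration_id": 101,
    "Configuration_Property.Configuration name": "recovery interval (min)",
    ...
  },
  ...
]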

You might notice that SSMS just brings back the column as a standard text column – which it is. Double-clicking on the column will show us the string in a new tab. It'd be nice if this formatted the JSON.

This experience is much nicer in Azure Data Studio – where clicking on the returned column will open a new window with the JSON nicely formatted.

FOR JSON PATH will give us a little more control over the format of the JSON document. Notice in the example below that, simply by naming the columns with dot-separated aliases, we get a slightly different format – in PATH mode the dot nests each property inside a parent object.
SELECT
[Configuration_Property.configuration_id] = C.configuration_id
,[Configuration_Property.Configuration name] = C.[name]
,[Configuration_Property.Value] = C.[value]
,[Configuration_Property.minimum] = C.minimum
,[Configuration_Property.maximum] = C.maximum
,[Configuration_Property.value_in_use] = C.value_in_use
,[Configuration_Property.description] = C.[description]
,[Configuration_Property.is_dynamic] = C.is_dynamic
,[Configuration_Property.is_advanced] = C.is_advanced
FROM
sys.configurations AS C
ORDER BY
C.configuration_id
FOR JSON PATH;
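
In PATH mode each row becomes a nested object, roughly of this shape (abridged and hand-formatted here; note that the is_dynamic and is_advanced bit columns come out as JSON true/false):
[
  {
    "Configuration_Property": {
      "configuration_id": 101,
      "Configuration name": "recovery interval (min)",
      "Value": 0,
      ...
      "is_dynamic": true,
      "is_advanced": true
    }
  },
  ...
]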

Having touched on some of the simpler aspects of JSON inside SQL Server, we've set the stage to get more advanced.
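
As a small taste of that more advanced ground (my illustration, not part of the article's examples): OPENJSON goes in the opposite direction to FOR JSON, shredding a JSON string back into rows. With the default schema it returns key, value and type columns (it requires database compatibility level 130 or higher):
-- Shred a JSON string into one row per top-level property.
SELECT J.[key], J.[value], J.[type]
FROM OPENJSON(N'{"name": "recovery interval (min)", "value_in_use": 0}') AS J;
GO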

Questions? Comments? Talk to the author today. Martin Catherall on Twitter.

About Martin Catherall

Martin has over 15 years of experience working with data-driven applications built on SQL Server and Microsoft Data Platform technology. He has substantial experience as a developer, database administrator and consultant.

LEARN MORE

Non-Tech World of Martin Catherall

Martin likes playing the electric guitar in his spare time – his family, however, are not so keen.

Want to write for the magazine? Comments? Feedback? Reach out to us at [email protected]

SQLServerGeeks Virtual Symposium recordings – SQL Server & Azure SQL

SQLServerGeeks Team is delighted to inform you about the successful execution of our Virtual Symposium on SQL Server & Azure SQL – May 2021. And guess what, the recordings are now published!!!

With sessions delivered by Microsoft Data Platform MVPs, across three time zones, on a wide range of topics, we made sure to bring you quality content at your most convenient hours, ensuring efficient learning with minimum distractions. This year, the symposium had three editions: US, EMEA & APAC.

The turnout was amazing, and so was the feedback from the attendees. It was very reassuring to see such active involvement during these trying times of a global pandemic - which goes to demonstrate the ever-learning nature of the SQL community.

Even though the Virtual Symposium turned out to be all that it promised, we wanted to take it a step further. With changes in lifestyle and schedules, the common man of the present day is in a constant state of flux and adaptation.

We here at SQLServerGeeks empathise with the current situation and have taken the necessary steps to ensure that you are always a click away from a library of well-curated content, so that you never compromise on learning.

So, if you are amongst those who (for whatever reason) were unable to attend the sessions of the Virtual Symposium, we bring you the opportunity to re-live the experience, with the release of the Session Recordings & Resources - Absolutely Free!

Session Recordings | Session Resources – Become a FREE member and access it all.

So if you haven't already, please sign up for a Free Membership and get full access to the Symposium Recordings and all the latest content from the world of SQL that we continue to release in the days to come. Happy Learning, Folks!

Go To Recordings


DPS 2020 – FREE content every day

The DPS Team is progressively releasing DPS 2020 content for the community. You can have free access and watch the sessions on-demand. Each day, new content from last year's conference.

Sessions Released till June 7, 2021


Paginated Reports: the New Old Operational Reporting Platform by Paul Turley
Azure AI, Power new possibilities for every organization by Lindsey Allen
AI Builder: AI in Power Apps and Power Automate by Leila Etaati
What You Can Learn from the Power BI Activity Log and REST APIs by Melissa Coates
SQL Server Encryption Unplugged by Ben J Miller
Introducing Graph Databases with Azure Cosmos DB by Will Velida
AI-Powered SharePoint Intranets by Stefano Tempesta
Matters of Concurrency by Louis Davidson
XMLA Read-Write Endpoint: The Cornerstone for Power BI as An Enterprise BI Solution by Ferenc Csonka
Advanced Storage Troubleshooting for SQL Server by Argenis Fernandez
My Top 5 Omissions from Azure SQL Database Applications and How To Fix Them by Martin Cairney
Manage Your Power Automate Governance Like a Rockstar by Haniel Croitoru
AI and Analytics with Apache Spark And Azure Databricks by Andrew Brust
How I Reduced My Power BI Dataset By 60% by Gilbert Quevauvilliers
Microsoft SQL Server - In-Memory OLTP Design Principles by Torsten Strauss
Architecting enterprise-grade data pipelines with Azure Data Factory by Abhishek Narain
Containers - What's Next? by Anthony Nocentino
Data Stewardship in An AI-Driven Ecosystem: InterpretML, FairLearn, WhiteNoise by Alicia Moniz
Working with Different Power BI Data Model Architectures by Peter Myers
Global Analytics with Azure Cosmos DB and Synapse Analytics by Warner Chaves
Inside Waits, Latches, and Spinlocks Returns by Bob Ward
Azure Arc Enabled SQL Server by Sasha Nosov

Learn More


End of June 2021 edition
