SQLServerGeeks Magazine June 2021
Initial Thoughts
Working as a DBA, you are often faced with multiple instances and multiple databases. Your job
is to ensure that the database servers stay running and the databases remain accessible. The tools
of the DBA trade include SQL Server Management Studio (SSMS), Azure Data Studio (ADS), or, if
you are a command-line person, sqlcmd at times. If you are new to being a DBA, PowerShell is
becoming a standard tool as well, and some of us old hands are embracing PowerShell as a key tool
in our toolbelts. Now is the time to ensure that you have the right tools and know how to use
them. This article will illustrate some of the reasons and ways to use PowerShell as a DBA to make life
a little more enjoyable and your work a little more scalable.
PowerShell Introduction
When you are thinking of PowerShell, what is the first thing you think of? Windows administration?
Another language you need to learn? Or are you thinking, "I am glad this tool can help me be a DBA"?
I can tell you, if the first two are how you think, it may be a challenge to get out of that mindset. But if
you are thinking of the third, you have a chance to get things moving in the right direction.
PowerShell is a tool that Microsoft created first for Windows administration, and we can use it as
DBAs because it is built upon the .NET Framework, which means that anything in .NET we can use in
PowerShell. In fact, SMO (SQL Server Management Objects) is built using .NET, so the library and all
its objects can be used in PowerShell to help administer databases and their servers. Think of all the
things you can do with SMO to get more from your time, and of the projects out there that leverage
this set of libraries to do work for us, giving us a single command instead of lots of code when we
want to do something. dbatools, an open-source module, is one of those tools we use as DBAs, as is
the SqlServer module from Microsoft. Both have a foundation in SMO and give you a command-based
toolset that lets you get information as well as change objects in the databases and servers.
These are only a few modules that are core to our work, but there are many more; each will be
addressed, and some others may be introduced, in the sections below. To use SMO directly, you first
have to load the assemblies so that you can reference them. One option is to reference the
assemblies that ship with SSMS (C:\Program Files (x86)\Microsoft SQL Server Management Studio
18\Common7\IDE):
PS:> Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Microsoft.SqlServer.Smo.dll"
PS:> Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Microsoft.SqlServer.Smo.Extended.dll"
PS:> Add-Type -Path "C:\Program Files (x86)\Microsoft SQL Server Management Studio 18\Common7\IDE\Microsoft.SqlServer.SqlEnum.dll"
The other way is to use the modern modules DBAtools or SqlServer. If you don’t have these installed
and you have PowerShell 5.1 installed (or are on Windows 10, because 5.1 is native there) you can do
the following:
PS:>Install-Module SqlServer -AllowClobber
PS:>Install-Module dbatools
Note: You specify -AllowClobber if SQL Server is installed on that machine, because SQL Server installs
the SQLPS module (kept for SQL Agent compatibility for now) and its command names overlap with
those in the SqlServer module.
Or if you already have them installed and want to update them (if you have both installed, otherwise
just specify the one you have installed):
PS:>Update-Module SqlServer, dbatools
These modules have the SMO libraries embedded and will side-load them when the module is
imported with Import-Module dbatools (or, likewise, Import-Module SqlServer).
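Once a module is imported, a quick way to confirm that the SMO types are available is to instantiate one directly. This is only a sketch; the server name is illustrative:

```powershell
# Importing dbatools side-loads the embedded SMO assemblies
Import-Module dbatools

# SMO types now resolve without any Add-Type calls
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'SqlServer1'
$server.Databases | Select-Object Name, Size, RecoveryModel
```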
Scenario 1
I have a database named Database1 on SqlServer1 and it has 2 files, 1 is a data file with the internal
name being “Database1_Data” and a log file with the internal name “Database1_log”. I need to grow
the Data File by 1GB and I need to grow the Log file by 512MB.
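One way to grow the data file by 1GB is through the SMO objects that dbatools hands back. This is a sketch, not the only way to do it; it assumes the file sits in the default PRIMARY filegroup, and it relies on SMO expressing file sizes in KB:

```powershell
Import-Module dbatools
$db   = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
$file = $db.FileGroups['PRIMARY'].Files['Database1_Data']
$file.Size += 1GB / 1KB   # grow by 1 GB, expressed in KB
$file.Alter()             # push the change to the server
```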
The log file is harder in this case. Expand-DbaDbLogFile expects a TargetLogSize rather than an
increment to add, so you have to go through a few more steps.
PS:> Import-Module dbatools
PS:> $db = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
PS:> Expand-DbaDbLogFile -SqlInstance SqlServer1 -Database Database1 -TargetLogSize ($db.LogFiles[0].Size+(512KB)) -Confirm:$false
SMO Solution
This will look familiar in SMO as these types of changes basically stay the same.
PS:> $db = Get-DbaDatabase -SqlInstance SqlServer1 -Database Database1
PS:> $db.CompatibilityLevel = "Version140"
PS:> $db.Alter()
Summary
You can perform your DBA duties in a few ways, including SSMS, but the point of this article is to
show that you can use PowerShell to change database settings, and that modules make doing so
simpler. PowerShell can do much more than you see here, but this should be a good taste of what you
can do. Join me in the quest to become a PowerShell DBA.
LEARN MORE
This is your only chance to win a full-access Summit pass - which includes all Summit
features such as Breakouts, Data + AI Gurukul, Technical Round Tables, AMA Sessions,
Panel Discussions, Community Zone & much more!
The Lucky Draw works on a SQL algorithm, which is responsible for selecting the winners
– every week! Yes, each week we are announcing 'multiple' winners. While registering
for the Lucky Draw, you need to provide your information just once. If you are not
selected as a winner in a specific week, your participation will be automatically carried
forward to the draws taking place in the following week(s). It is important to note that
the Lucky Draw and the resulting free pass are exclusively for the LIVE attendees of the
Summit. In case you have already booked the Summit ticket and are also chosen as a
winner, a full refund will be provided by the DPS Team. So worry not if your luck is
running low at the moment – we made sure that the odds are still stacked in your
favour! On that note, consider giving Lady Luck a chance and submit your participation
at the earliest by visiting -
DPS 2021 LUCKY DRAW
Every day I live to make the world better. My community is important, and the people I identify
with depend on me to lead with them in mind. Being a black American female and the first
member to graduate college in my immediate family, I have awaited this time when diversity,
equity, and inclusion are being heard and talked about more often in workplaces influenced by white
America. This article breaks down the meaning of each word and provides action steps that you, as an
individual, can do to change your professional environment and support the movement for a better
society.
Diversity:
Your organization is diverse when it includes people from different social/ethnic backgrounds, sexual
orientations, genders, and people with disabilities. A diverse working environment will have a greater
variety of thought and a better mix of backgrounds and experience. This leads to better ideas and
approaches for making progress in business and projects. Being with individuals of different
backgrounds and various life experiences can produce ideas or points of view that others might
not have considered. Everybody has their own way of approaching an issue - thoughts
are shaped by an individual's experiences and the perspective they bring. Rather than everybody
contributing similar considerations and solutions, a diverse interpretation and approach can lead to
creativity and innovation.
Equity:
People who do not face challenges because of their identity have a head start on the ones who do.
This should not imply that the person who is behind cannot compensate for any shortcomings - in
most cases, people overcome even though the chances may not be favourable based on societal
norms.
Take Action for Diversity, Equity, & Inclusion
Equity is ensuring that the challenges are not stacked, to begin with - it is to make changes so
everybody has a similar chance to succeed. To have equity in your organization, there must be an
understanding of what everyone needs and wants to be successful in that work environment. When
you give everyone the same thing, it’s known as equality; and equality could lead to boredom and job
switching. Try elevating equity and give everyone what they truly need to be happy and productive at
work.
Inclusion:
Inclusion births acceptance. We are at our best when we are accepted as ourselves. To be your
genuine self, a person must feel included. Inclusion recognizes how much people can use their voice,
make decisions in a group, and move up in leadership roles in an organization. When a person is
included, they have a feeling of belonging that drives results as they begin to collaborate in teams and
provide new ideas.
LEARN MORE
When DeNisha is not working, she enjoys mentoring college students and traveling the world to
experience new things.
Consider this query:
SELECT * FROM tbl WHERE col LIKE '%alter%'
The table has some size and the text in the column is long and the query is slow. What
options are there to speed it up?
The best option in many situations is to use full-text search, which is a feature that ships with SQL
Server (although, at Setup time it is optional to install full-text support). With a full-text index in place
the query can be written as:
SELECT * FROM tbl WHERE CONTAINS(col, 'alter')
Unfortunately, full-text is not an option if users want to find alter also when it is in the middle of a
word such as inalterable, because full-text builds an index on words. Likewise, full-text is not an
option if users want to include punctuation characters in their search strings, because full-text strips
those out.
There is a second option for building a fast index-based solution, using something known as n-grams,
but there is no built-in support for this in SQL Server and it is very heavy artillery. Thus, in most cases
you will have to make do with LIKE and wildcard patterns. You may already understand that one
reason LIKE searches with leading wildcards are slow is that no index can be used, so they result in a
scan.
However, there is a second reason why these searches are slow, and I would like to share a tip on how
you can easily factor out that part to speed things up a little bit.
To understand this second reason, we start by creating this somewhat non-sensical table:
CREATE TABLE guids (
   Windows_nvarchar nvarchar(200) COLLATE Latin1_General_CI_AS,
   SQL_nvarchar nvarchar(200) COLLATE SQL_Latin1_General_CP1_CI_AS,
   Windows_varchar varchar(200) COLLATE Latin1_General_CI_AS,
   SQL_varchar varchar(200) COLLATE SQL_Latin1_General_CP1_CI_AS
)
You may note that there is one pair of nvarchar columns and one pair of varchar columns, likewise
there is one pair of columns with a Windows collation and one pair of columns with an SQL collation.
When I run the same search for the columns SQL_nvarchar and Windows_varchar, the result is
similar:
SQL_nvarchar: 3752 ms. Count: 23239
Why does this happen? With a LIKE search with a leading wildcard, SQL Server does not only have to
scan the index; it also has to scan the strings. For a column which does not include the search string,
SQL Server has to scan the string from start almost to the end to see if there is a character that is equal
to a according to the current collation. Depending on the type of collation, that could be an A, á or Å.
Then again, if the collation is accent-sensitive, an a followed by a combining acute accent is not a
match, because logically that is the same as á. You may not have heard of combining accents before,
but they are a feature of Unicode, and applying the full rules of Unicode is quite complex, and when
SQL Server has to do this for every character, it starts to add up.
This obviously happens if you have nvarchar, since that is Unicode. But it also happens if you have a
Windows collation and varchar, because with a Windows collation all operations are performed in
Unicode also for varchar.
With an SQL collation and varchar, this is different. The definition of varchar in SQL collations goes
back to SQL 6.5 and earlier when SQL Server did not support Unicode. For this reason, SQL collations
have a completely different library with its own set of rules. And particularly, this library only has to
bother about the 255 characters in the code page that the SQL collation supports for varchar and
therefore character comparison is a much simpler and faster business.
So is the lesson that we should use SQL collations with varchar for LIKE searches? Not really. If you
design a new application, there is no reason not to design it for full international support to be future-
proof. This means that you should use nvarchar, or, if you are on SQL 2019 or Azure, use varchar with
a UTF8 collation. But varchar with an SQL collation is a poor choice.
However, there is a second way to avoid the complex Unicode rules: cast the search column to a binary
collation:
DECLARE @searchstr nvarchar(50) = '%alter%', @cnt int, @d datetime2 = sysdatetime()
SELECT @cnt = COUNT(*) FROM guids
WHERE SQL_nvarchar COLLATE Latin1_General_BIN2 LIKE @searchstr
PRINT concat('Binary collation: ', datediff(ms, @d, sysdatetime()), ' ms. ', 'Count: ', @cnt)
With a binary collation, all that matters is the code point. But this also has the side effect that the
comparison is case-, accent- and everything-else-sensitive. You can see that the count is 0, and this is
because the search string is lowercase while the column data is all uppercase.
DECLARE @searchstr nvarchar(50) = '%alter%', @cnt int, @d datetime2 = sysdatetime()
SELECT @cnt = COUNT(*) FROM guids
WHERE SQL_nvarchar COLLATE Latin1_General_BIN2 LIKE upper(@searchstr)
PRINT concat('Binary collation, upper: ', datediff(ms, @d, sysdatetime()), ' ms. ', 'Count: ', @cnt)
It is almost five times faster than the original search! And notice that this works no matter whether
the column is varchar or nvarchar. It is still not going to be blazingly fast, because there is still an index
or table scan. But at least we have reduced the cost of the string scan – and with very simple measures.
I should add the disclaimer that this is not 100% faithful to the result you get with a case-insensitive
collation: there are situations where you will get different results. However, these are mostly corner
cases, and I would suggest that it will be good enough. On the other hand, if the users also require
the search to be accent-insensitive, so that resume matches résumé, this trick is not an option for you.
Closing note: above I mentioned n-grams as a solution to this problem. This is actually something I
have written about, and it is published as a chapter in the book SQL Server MVP Deep Dives, where a
number of us MVPs wrote one or two chapters about our favourite topics. We do not make any money
from this book; our royalties go to War Child International, a network of independent organisations
working across the world to help children affected by war. If you want a really fast implementation for
this type of search, you may want to check out that book. But as I said, it is heavy artillery. By the
way, the same book also includes an excellent introduction to full-text indexes, written by Robert C.
Cain.
LEARN MORE
Erland plays bridge (when there is not a pandemic), enjoys travelling (when there is not a pandemic),
and in the far too short Swedish summer he goes hiking and biking in the forests and around the lakes
around Stockholm.
Data security is one of the most important areas of any business nowadays. Data volumes are
growing rapidly in almost every type of business, which makes it even more important to protect
data. SQL Server has various features that help us apply security layers to protect data, for example:
Transparent Data Encryption, backup encryption, Always Encrypted, Row-Level Security, etc. Before
implementing these features, it is also important to identify what type of data is stored in our
databases; then we can decide what security layer is required to protect that data.
This is where Data Discovery and Classification comes into the picture.
Data Discovery and Classification is the process of scanning your data and classifying it by applying
proper labels based on a predefined set of patterns, keywords, or rules.
To use this feature, you can right click on your database in SSMS -> Tasks -> Data Discovery and
Classification.
Here we don't have any columns classified yet, but we have some recommendations. We can get
details of all recommendations by clicking on the top or bottom message.
The predefined default information types are as mentioned below. Using these information types,
you can categorize your data easily. If you want to change the information type for any column in
the recommendation list, you can change it from the drop-down.
Once you are done accepting or modifying recommendations, click the "Accept selected
recommendations" button.
After accepting recommendations, the Save button is enabled; click "Save" to save all these
classifications.
If a column is not in the recommendation list, it can be added manually by clicking the
"Add Classification" option next to "Save".
In the same way, information types can be modified or added based on your organization's
standards. Under each information type, patterns are defined to provide recommendations; you can
modify them or add new patterns. This is easy to manage.
If you are using SQL Server 2019, you can use the sys.sensitivity_classifications system catalog view
to get classified column details. You can also add new classified columns using the ADD SENSITIVITY
CLASSIFICATION syntax.
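As a sketch of that T-SQL (the table, column, label, and information type below are illustrative, not from a real database):

```sql
-- Classify a column with T-SQL (SQL Server 2019+); names are illustrative
ADD SENSITIVITY CLASSIFICATION TO dbo.Customers.Email
WITH (LABEL = 'Confidential', INFORMATION_TYPE = 'Contact Info');

-- Inspect the classifications that now exist in the database
SELECT * FROM sys.sensitivity_classifications;
```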
Conclusion:
Data Discovery & Classification in SQL Server Management Studio and SQL Server can play a very
important role in categorizing all the data inside your databases. This helps database security and
policy compliance in various ways.
LEARN MORE
During his free time, Prince loves cooking & watching historical movies.
Data Platform Virtual Summit will run from Sep 13 to 18. Pre-Cons on Sep 8 & Sep 9.
Post-Cons on Sep 20 & Sep 21.
A 100% technical learning event with 150+ Breakout Sessions, 20+ Training Classes, 100+ of the World's
Best Educators & 54 hours of conference sessions makes DPS 2021 one of the largest online learning events.
Virtual World
of DPS
Last year, Data Platform Summit
transitioned into a virtual event. We
brought you 30+ Training Classes, 200+
Breakout Sessions, 170+ World’s Best
Educators, 48 hours of Pre-Cons, 48
hours of Post-Cons & 72 hours of non-stop conference sessions – DPS 2020
was the largest online learning event
on Microsoft Azure Data, Analytics &
Artificial Intelligence.
Breakout Session Room
For your comfort, we will cover all time zones,
running continuously. The event is coming to
your country, your city, your home! So, now
being virtual, DPS has a bigger participation from
Microsoft Redmond based Product Teams &
worldwide MVPs.
Round Tables
In this article, we will see a few scripts that help us find CPU usage using DMVs & DMFs. These
will be quite handy in day-to-day DBA query-tuning efforts.
SQL Server Dynamic Management Views (DMVs) & Dynamic Management Functions (DMFs) are
used to investigate the health of the SQL Server engine. The metrics & data items they produce are
very useful in analyzing and fixing performance problems.
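A typical starting query for this, built on sys.dm_exec_query_stats, might look like the following sketch (the aliases are chosen to match the column names discussed in the next paragraph):

```sql
-- Top 10 CPU-consuming statements from the plan cache (a sketch).
-- total_worker_time and total_elapsed_time are in microseconds.
SELECT TOP (10)
    qs.total_worker_time / 1000  AS [TotalCPU_ms],
    qs.total_elapsed_time / 1000 AS [TotalDuration_ms],
    qs.total_logical_reads,
    qs.total_grant_kb,
    qs.execution_count
FROM sys.dm_exec_query_stats AS qs
ORDER BY qs.total_worker_time DESC;
```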
In the above query, if we change the column in the ORDER BY clause to the total_logical_reads
column, we can find the TOP 10 I/O-intensive queries (from a read perspective). If we sort on the
total_grant_kb column, we can find the top queries asking for large memory grants. If we sort on
the [TotalDuration_ms] column, we can extract long-running queries.
We can CROSS APPLY with sys.dm_exec_query_plan & sys.dm_exec_sql_text DMFs to extract the
query text and the query plan.
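For example (a sketch; adjust the TOP clause and ORDER BY column to taste):

```sql
-- Attach the query text and cached plan to each statement's statistics
SELECT TOP (10)
    qs.total_worker_time,
    st.text       AS query_text,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)    AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;
```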
This was just a level-100 introduction to extracting workloads causing excessive CPU usage. What are
your techniques? Post/participate in the discussion: LinkedIn, Twitter, FB.
Satya Ramesh has credible experience working with SQL Server. As a Senior
Consultant for SQLMaestros, he has worked on countless projects involving SQL
Server & Azure SQL Database.
LEARN MORE
SQL Trail
15 June 2021
SQL Trail isn’t your average tech conference.
Built to mimic the “hallway track,” SQL Trail
is designed around a conversational model.
DATA SATURDAY - DATA ANZ
25 June 2021
Data Saturdays is a place for the data community
to run small regional events with little outlay.
Video Channels
SQLServerGeeks
You've been a SQL Server DBA for many years, and now you have a project to deploy SQL Server
containers as part of data estate modernization; or, as part of the company vision to diversify
your environment, you need to deploy SQL Server on the Linux ecosystem, whether on a
virtual machine or bare metal, running on-prem or on a cloud like Azure. If you are this DBA now, or could
be in a few months' time, then this article is for you to get started on this wonderful journey.
In this article, I've attempted to answer some of the common questions that users have when they
first start working with SQL Server on Linux/containers. We do have a document in place for SQL
Server on Linux FAQs; nonetheless, this article may contain some questions similar to those
already answered there, but the idea is to also provide the SQL Server containers context for those
questions.
Here, I attempt to answer some of the common questions that I've been asked at various conferences,
in customer meetings, and by our support team when working with SQL Server on Linux/containers.
Wherever feasible, I've tried to respond to these questions with reference links to the documentation,
as the docs are continuously updated and maintained by us, ensuring that whenever you refer to
this article you get up-to-date information. Having stated that, it's now time to get the questions
rolling:
• Is the SQL Server engine the same across SQL Server deployed on any operating system or
environment?
Amit: The SQL Server engine is the same across SQL Server deployed on different operating
systems, be it SQL Server on Linux, SQL Server on Windows, or SQL Server on Linux-based
containers deployed on standalone hosts or Kubernetes platforms. The only difference is that
a few features are currently not supported on SQL Server on Linux/containers; we are working
to ensure that we soon bring parity for those features as well. The current list of unsupported
features for SQL Server on Linux or containers is available at Unsupported features & services
for your reference.
• Are SQL Server on Linux containers deployed on supported Linux distributions supported for
production workloads? What are the support boundaries?
Amit: Yes, SQL Server on Linux containers are supported for production workloads. You can
deploy SQL Server on Linux-based containers on any of the supported Linux distributions. You
can obtain the SQL Server images from the Microsoft container registry; to discover the
Ubuntu-based SQL container images you can refer to Docker Hub, and to discover the Red Hat-based
containers you can refer to the Red Hat container catalog. SQL Server images can also be created
using a custom dockerfile, but ensure that you follow the support guidelines available here
when you create the custom dockerfile.
• How do I create SQL Server Linux container image for SUSE based host? Is there a sample
dockerfile that I can refer to?
Amit: You can create a SQL Server Linux container image for a SUSE-based host through a
dockerfile; a sample is available here for reference. This is also supported for production.
• Can I deploy SQL Server Linux containers using podman as well, like I do with docker?
Amit: Yes, both Podman and docker are supported and can be used to deploy and run SQL Server
containers.
• If I am new to the Linux ecosystem but well versed in SQL Server, is there a tutorial I can
refer to to learn about SQL Server on Linux?
Amit: If you are planning to start working with SQL Server on Linux with previous experience
of SQL Server on Windows, you will find that SQL Server on Linux is not all that different; only
the setup and ecosystem experience changes. To get started, you can refer to this tutorial for
the basics of Linux and then build on those basics based on your interest. Similarly, if you are
well versed in Linux but new to SQL Server, you can start with this tutorial.
• Can I take a backup of a database running on SQL Server Linux containers and restore it on
SQL Server on Windows?
Amit: Absolutely, the SQL Server engine is the same across all deployments of SQL Server. So,
you can take the backup of a database from SQL Server Linux-based containers, move it to the
Windows environment, and then restore it on SQL Server on Windows on bare metal/VM. In
fact, we have documentation available which talks about database migration to SQL Server on Linux.
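As a sketch (the database name, logical file names, and paths below are illustrative), the round trip is the familiar backup/restore pair:

```sql
-- On SQL Server in the Linux container:
BACKUP DATABASE [Sales]
TO DISK = N'/var/opt/mssql/backup/Sales.bak';

-- Copy the .bak file to the Windows machine, then on SQL Server on Windows:
RESTORE DATABASE [Sales]
FROM DISK = N'C:\Backups\Sales.bak'
WITH MOVE N'Sales'     TO N'C:\Data\Sales.mdf',
     MOVE N'Sales_log' TO N'C:\Data\Sales_log.ldf';
```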
• Is there a guided recommendation on how to deploy SQL Server on Linux for best
performance?
Amit: Yes, we have detailed documentation available here which talks about various
performance configurations that can be set at the storage, filesystem, kernel, and CPU level,
and more, for optimal SQL Server performance. Most of the recommendations mentioned in this
document can also be applied to the host machine running SQL Server containers, such as the
storage settings.
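With docker, for example, a CPU limit can be applied when starting the container. This is a sketch; the image tag and password placeholder are illustrative:

```shell
# Limit the SQL Server container to 8 CPUs on a 16-CPU host
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
    --cpus="8" -p 1433:1433 \
    -d mcr.microsoft.com/mssql/server:2019-latest
```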
This command limits the SQL Server container to only 8 of the 16 logical processors. In the
error log you will see that SQL Server can see all 16 logical processors, but it only utilizes 8 of
the 16 CPUs to run the workload.
• How do I configure SQL Server on containers? Can I use the mssql-conf tool?
Amit: For SQL Server containers, you can provide the configuration settings by mounting the
mssql.conf file inside the container when you deploy it. Some of these configuration settings
are also available as environment variables that can be set when deploying the SQL Server
containers. You can see the samples in the above-referenced article.
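As a sketch (the host path, memory value, and image tag are illustrative), mounting a prepared mssql.conf and setting an environment variable at the same time look like this:

```shell
# Mount a custom mssql.conf into the container, and also set a
# configuration value via an environment variable
docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=<YourStrong!Passw0rd>" \
    -e "MSSQL_MEMORY_LIMIT_MB=4096" \
    -v /opt/mssql-custom/mssql.conf:/var/opt/mssql/mssql.conf \
    -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
```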
• Do you have any sample Helm Charts to deploy SQL Server Containers?
Amit: Yes, you can deploy SQL Server via Helm charts, and here are some sample Helm charts
to get you started: please refer to the statefulset Helm chart for a StatefulSet deployment and
this one for a normal deployment.
• Can we also setup the resource limits when deploying SQL Server Containers on Kubernetes
platform?
Amit: Absolutely, you can always use the resource limits option in the deployment YAML file
when deploying SQL Server containers. In fact, you should always try to ensure that the QoS
(Quality of Service) for the SQL Server container/pod is set to Guaranteed. This means that the
SQL Server resource requests and resource limits are the same, ensuring the SQL Server container
gets its resources when it is scheduled to run on a worker node by the Kubernetes cluster.
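In the deployment YAML, Guaranteed QoS simply means the requests and limits match, as in this container-spec fragment (the values are illustrative):

```yaml
# Fragment of a container spec: requests == limits => Guaranteed QoS
resources:
  requests:
    memory: "2Gi"
    cpu: "2"
  limits:
    memory: "2Gi"
    cpu: "2"
```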
• Can I configure AD (Active Directory) authentication for SQL Server on Linux/containers?
Amit: Active Directory authentication is supported for both SQL Server on Linux and SQL Server
Linux-based containers. In fact, you can now use a preview tool called adutil to easily configure
Active Directory authentication for both. This tool eases the configuration by letting you manage
the Windows Active Directory from a Linux machine that is joined to the domain.
• I want to set up an Always On availability group between multiple SQL Server containers
running on the same Kubernetes cluster. How do I do it?
Amit: As of the writing of this article, SQL Server availability group setup on containers
is only supported in read-scale mode and not in any other mode. Hence, you have a DR option
using Always On availability groups for SQL Server on containers, but not HA (high availability).
You can follow this blog to set up a read-scale Always On availability group on SQL Server
containers running in Kubernetes.
• If I have more questions, where can I post them, or can I write to you directly?
Amit: Yes, please send your questions on SQL Server on Linux or containers to
[email protected], or follow me on linkedin.com/in/amvin87/ or
twitter.com/amvin87. I'd be more than happy to assist.
LEARN MORE
Folding@Home
A Distributed Computing Project for
Simulating Protein Dynamics
Glenn Berry
It was a huge honour to be asked to write a piece for DataPlatformGeeks – and I pondered a subject
for a while.
Over the years SQL Server has introduced several pieces of functionality that have really benefited
users – with the vast majority having good uptake. One of the things that I love about Microsoft
products is that when they see their users actively using a feature, they give that feature a bit of extra
care and attention based on feedback from those users.
One of the recent features that I've found myself using more often than I initially expected is JSON –
or JavaScript Object Notation, to give it its full name.
The ability to manipulate JSON inside SQL Server first appeared in the 2016 release. As you are
probably already aware, JSON is an open standard file format for data interchange.
My initial thought was that it might be similar to the XML integration that SQL Server has had
since 2005 – although that's pretty robust, it can take a lot of learning.
That hasn't been my experience with JSON. While there was obviously some learning involved, I
found it quite intuitive and – if you have experience with any other JSON parser – very easy
to get started with.
Firstly – and perhaps surprisingly – there is no JSON data type in SQL Server. It's just a plain old
text string.
However, SQL Server does give us a method of checking that a string is valid JSON:
SELECT ISJSON(N'
{
"configuration_id": 101,
"name": "recovery interval (min)",
"value": 0,
"minimum": 0,
"maximum": 32767,
"value_in_use": 0,
"description": "Maximum recovery interval in minutes",
"is_dynamic": true,
"is_advanced": true
}');
This means that if we decide to store JSON in a column, we can introduce a column constraint to ensure
that the text we are inserting is in fact valid JSON.
So, let's create a database to play with, along with a table that has a column to store valid JSON:
USE tempdb;
GO
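A minimal table of that shape might look like the following sketch (the table and column names are illustrative; the CHECK constraint rejects anything that is not valid JSON):

```sql
CREATE TABLE dbo.ConfigDocs
(
    ConfigDocId int IDENTITY(1,1) PRIMARY KEY,
    ConfigJson  nvarchar(max) NOT NULL
        CONSTRAINT CK_ConfigDocs_ValidJson CHECK (ISJSON(ConfigJson) = 1)
);
```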
Also, if we want to retrieve valid JSON from the database, there is syntax that we can use directly
for that: FOR JSON AUTO.
In this example, we'll simply pull back some data from the sys.configurations table.
SELECT
[Configuration_Property.configuration_id] = C.configuration_id
,[Configuration_Property.Configuration name] = C.[name]
,[Configuration_Property.Value] = C.[value]
,[Configuration_Property.minimum] = C.minimum
,[Configuration_Property.maximum] = C.maximum
,[Configuration_Property.value_in_use] = C.value_in_use
,[Configuration_Property.description] = C.[description]
,[Configuration_Property.is_dynamic] = C.is_dynamic
,[Configuration_Property.is_advanced] = C.is_advanced
FROM
sys.configurations AS C
ORDER BY
C.configuration_id
FOR JSON AUTO;
This experience is much nicer in Azure Data Studio, where clicking on the returned column will open
a new window with the JSON nicely formatted.
FOR JSON PATH will give us a little more control over the format of the JSON document. Notice in the
example below that we have chosen to format the JSON slightly differently – by using dot-separated
names in the column aliases, which FOR JSON PATH turns into nested objects.
SELECT
[Configuration_Property.configuration_id] = C.configuration_id
,[Configuration_Property.Configuration name] = C.[name]
,[Configuration_Property.Value] = C.[value]
,[Configuration_Property.minimum] = C.minimum
,[Configuration_Property.maximum] = C.maximum
,[Configuration_Property.value_in_use] = C.value_in_use
,[Configuration_Property.description] = C.[description]
,[Configuration_Property.is_dynamic] = C.is_dynamic
,[Configuration_Property.is_advanced] = C.is_advanced
FROM
sys.configurations AS C
ORDER BY
C.configuration_id
FOR JSON PATH;
Having touched on some of the simpler aspects of JSON inside SQL Server, we've set the stage to
get more advanced.
LEARN MORE
Martin likes playing the electric guitar in his spare time - His family, however, are not so keen
With sessions delivered by Microsoft Data Platform MVPs, across three time zones, on
a wide range of topics, we made sure to bring you quality content at your most
convenient hours, ensuring efficient learning with minimum distractions. This year,
the symposium had three editions: US, EMEA & APAC.
The turnout was amazing, and so was the feedback from the attendees. It was very
reassuring to see such active involvement during these trying times of a global
pandemic - which goes to demonstrate the ever-learning nature of the SQL community.
Even though the Virtual Symposium turned out to be all that it promised, we wanted to
take it a step further. With changes in lifestyle and schedules, the common man of the
present day is in a constant state of flux and adaptation.
We here at SQLServerGeeks empathise with the current situation and have taken the
necessary steps to ensure that you are always a click away from a library of well-curated
content, so that you never compromise on learning.
So, if you are amongst those who (for whatever reason) were unable to attend the
sessions of the Virtual Symposium, we bring you the opportunity to re-live the
experience, with the release of the Session Recordings & Resources - absolutely free!
Session Recordings | Session Resources – Become a FREE member and access it all.
So if you haven't already, please sign up for a Free Membership and get full access to
the Symposium Recordings and all the latest content from the world of SQL that we
continue to release in the days to come. Happy learning, folks!
Go To Recordings
Learn More