PowerShell Scripting Toolmaking Sample
Forever Edition
This is a Leanpub book. Leanpub empowers authors and publishers with the Lean Publishing
process. Lean Publishing is the act of publishing an in-progress ebook using lightweight tools and
many iterations to get reader feedback, pivot until you have the right book and build traction once
you do.
Contents

Dedication
Acknowledgements
Foreword
Introduction
  Pre-Requisites
  Versioning
  The Journey
  Following Along
  Providing Feedback
Verify Yourself
  The Transcript
  Our Read-Through
  Our Answer
  How'd You Do
Proxy Functions
  For Example
  Creating the Proxy Base
  Modifying the Proxy
  Adding or Removing Parameters
  Your Turn
  Let's Review
Part 6: Pester
About This Book
If you purchased this book, thank you. Know that writing a book like this takes hundreds of hours
of effort, during which we’re not making any other income. Your purchase price is important to
keeping a roof over our families’ heads and food on our tables. Please treat your copy of the book as
your own personal copy - it isn’t to be uploaded anywhere, and you aren’t meant to give copies to
other people. We’ve made sure to provide a DRM-free file (excepting any DRM added by a bookseller
other than LeanPub) so that you can use your copy any way that’s convenient for you. We appreciate
your respecting our rights and not making unauthorized copies of this work.
If you got this book for free from someplace, know that you are making it difficult for us to write
books. When we can’t make even a small amount of money from our books, we’re encouraged to
stop writing them. If you find this book useful, we would greatly appreciate you purchasing a copy
from LeanPub.com or another bookseller. When you do, you’ll be letting us know that books like
this are useful to you, and that you want people like us to continue creating them.
Please note that this book is not authorized for classroom use unless a unique copy has been
purchased for each student. No-one is authorized or licensed to manually reproduce the PDF
version of this book for use in any kind of class or training environment.
¹https://leanpub.com
This book is copyrighted (c)2017-2020 by Don Jones and Jeffery Hicks, and all rights are reserved.
This book is not open source, nor is it licensed under a Creative Commons license. This book is not
free, and the authors reserve all rights.
Dedication
This book is fondly dedicated to the many hardworking PowerShell users who have, for more than
a decade, invited us into their lives through our books, conference appearances, instructional videos,
live classes, and more. We’re always humbled and honored by your support and kindness, and you
inspire us to always try harder, and to do more. Thank you.
Acknowledgements
Thanks to Michael Bender, who has selflessly provided a technical review of the book. Any
remaining errors are, of course, still the authors’ fault, but Michael has been tireless in helping
us catch many of them.
About the Authors
Don Jones received the Microsoft MVP Award for 16 consecutive years for his work with
Windows PowerShell and administrative automation. He has authored dozens of books, articles,
white papers, and instructional videos on information technology, and today is a Vice President
in the Content team at Pluralsight.com. Don was also a co-founder of The DevOps Collective²,
which offers IT education programs, scholarships, and which runs PowerShell.org and PowerShell
+ DevOps Global Summit³ and other DevOps- and automation-related events.
Don’s recent writing focuses on business, instructional design, self-improvement, and fiction, and
can be found at http://leanpub.com/u/donjones⁴. You can follow Don on Twitter @concentratedDon.
He blogs at DonJones.com.
Jeff Hicks is a grizzled IT veteran with almost 30 years of experience, much of it spent as an
IT infrastructure consultant specializing in Microsoft server technologies with an emphasis on
automation and efficiency. He is a multi-year recipient of the Microsoft MVP Award. He works
today as an independent author, teacher and consultant. Jeff has taught and presented on PowerShell
and the benefits of automation to IT Pros worldwide for over a decade. Jeff has authored and co-
authored a number of books, writes for numerous online sites, is a Pluralsight author, and a frequent
speaker at technology conferences and user groups world-wide.
You can keep up with Jeff on Twitter (@JeffHicks) and on his blog at https://jdhitsolutions.com⁵.
Additional Credits
Technical editing has been helpfully provided not only by our readers, but by Michael Bender. We’re
grateful to Michael for not only catching a lot of big and little problems, but for fixing most of them
for us. Michael rocks, and you should watch his Pluralsight videos. However, anything Michael
didn’t catch is still firmly the authors’ responsibility.
²https://devopscollective.org
³https://events.devopscollective.org/
⁴http://leanpub.com/u/donjones
⁵https://jdhitsolutions.com
Foreword
After the success of Learn PowerShell in a Month of Lunches, Jeff and I wanted to write a book that
took people down the next step, into actual scripting. The result, of course, was Learn PowerShell
Toolmaking in a Month of Lunches. In the intervening years, as PowerShell gained more traction
and greater adoption, we realized that there was a lot more of the story that we wanted to tell.
We wanted to get into help authoring, unit testing, and more. We wanted to cover working with
different data sources, coding in Visual Studio, and so on. These were really out of scope for the
Month of Lunches series’ format. And even in the “main” narrative of building a proper tool, we
wanted to go into more depth. So while the Month of Lunches book was still a valuable tutorial in
our minds, we wanted something with more tooth.
At the same time, this stuff is changing really fast these days. Fast enough that a traditional
publishing process - which can add as much as four months to a book’s publication - just can’t keep
up. Not only are we kind of constantly tweaking our narrative approach to explaining these topics,
but the topics themselves are constantly evolving, thanks in part to an incredibly robust community
building add-ons like Pester, PlatyPS, and more.
So after some long, hard thinking, we decided to launch this effort. As an Agile-published book on
LeanPub, we can continuously add new content, update old content, fix the mistakes you point out
to us, and so on. We can then take major milestones and publish them as “snapshots” on places like
Amazon, increasing the availability of this material. We hope you find the project as exciting and
dynamic as we do, and we hope you’re generous with your suggestions - which may be sent to us
via the author contact form from this book’s page on LeanPub.com. We’ll continue to use traditional
paper publishing, but through a self-publishing outlet that doesn’t impose as much process overhead
on getting the book in print. These hard copy editions will be a “snapshot” or “milestone edition” of
the electronic version.
It’s important to know that we still think traditional books have their place. PowerShell Scripting in
a Month of Lunches, the successor to Learn PowerShell Toolmaking in a Month of Lunches, covers
the kind of long-shelf-life narrative that is great for traditionally published books. It’s an entry-level
story about the right way to create PowerShell tools, and it’s very much the predecessor to this book.
If Month of Lunches is about getting your feet under you and putting them on the right path, this
book is about refining your approach and going a good bit further on your journey.
Toolmaking, for us, is where PowerShell has always been headed. It's the foundation of a
well-designed automation infrastructure, of a properly built DSC model, and of pretty much everything else
you might do with PowerShell. Toolmaking is understanding what PowerShell is, how PowerShell
wants to work, and how the world engages with PowerShell. Toolmaking is a big responsibility.
My first job out of high school was as an apprentice for the US Navy. In our first six weeks, we
rotated through various shops - electronics, mechanical, and so on - to find a trade that we thought
we’d want to apprentice for. For a couple of weeks, I was in a machine shop. Imagine a big, not-
climate-controlled warehouse full of giant machines, each carving away at a piece of metal. There’s
lubrication and metal chips flying everywhere, and you wash shavings out of yourself every evening
when you go home. It was disgusting, and I hated it. It was also boring - you set a block of metal
into the machine, which might take hours to get precisely set up, and then you just sat back and
kind of watched it happen. Ugh. Needless to say, I went into the aircraft mechanic trade instead.
Anyway, in the machine shop, all the drill bits and stuff in the machine were called tools and dies.
Back in the corner of the shop, in an enclosed, climate-controlled room, sat a small number of nicely-
dressed guys in front of computers. They were using CAD software to design new tools and dies for
specific machining purposes. These were the tool makers, and I vowed that if I was ever going to
be in this hell of a workplace, I wanted to be a toolmaker and not a tool user. And that’s really the
genesis of this book’s title. All of us - including the organizations we work for - will have happier,
healthier, more comfortable lives as high-end, air-conditioned toolmakers rather than the sweaty,
soaked, shavings-filled tool users out on the shop floor.
Enjoy!
Don Jones
Introduction
Pre-Requisites
We’re assuming that you’ve already finished reading an entry-level tutorial like, Learn Windows
PowerShell in a Month of Lunches, or that you’ve got some solid PowerShell experience already
under your belt. Specifically, nothing on this list should scare you:
If you’ve already done things like written functions in PowerShell, that’s marvelous - but, you may
need to be open to un-learning some things. Some of PowerShell’s best practices and patterns aren’t
immediately obvious, and especially if you know how to code in another language, it’s easy to go
down a bad path in PowerShell. We’re going to teach you the right way to do things, but you need
to be willing to re-do some of your past work if you’ve been following the Wrong Ways.
We also assume that you’ve read PowerShell Scripting in a Month of Lunches, a book we wrote for
Manning. It provides the core narrative of "the right way to write PowerShell functions and tools,"
and this book essentially picks up where that one leaves off. Look for that book in late 2017 from Manning
or your favorite bookseller. Part 1 of this book briefly slams through this “the right way” narrative
just to make sure you’ve got it in your mind, but the Month of Lunches title really digs into those
ideas in detail.
Versioning
This book is primarily written against Windows PowerShell v5/v5.1 running on Microsoft Windows.
In January 2018, Microsoft announced the General Availability of PowerShell Core 6.0, which
is a distinct cross-platform “branch” of PowerShell. This branch has now become PowerShell 7,
which was released in early 2020. As far as we can tell, everything we teach in this book applies
to PowerShell 7, too - although some of our specific examples may still only work on Windows
PowerShell, the concepts and techniques are applicable to PowerShell 7. However, PowerShell 7
includes some new scripting features which we’ll cover in a dedicated chapter or two.
The Journey
This book is laid out into seven parts:
Following Along
We’ve taken some pains to provide review Q&A at the end of most chapters, and to provide lab
exercises (and example answers) at the end of many chapters. We strongly, strongly encourage you
to follow along and complete those exercises - doing is a lot more effective than just reading. And
if you get stuck, hop onto the Q&A forums on PowerShell.org and we’ll try and unstick you. We’ve
tried to design the labs so that they only need a Windows client computer - so you won’t need a
complex, multi-machine lab setup. Of course, if you have more than one computer to play with,
some of the labs can be more interesting since you can write tools that query multiple computers
and so forth. But the code’s the same even if you’re just on a single Windows client, so you’ll be
learning what you need to learn.
Providing Feedback
Finally, we hope that you’ll feel encouraged to give us feedback on this book. There’s a “Contact
the Authors” form on this book’s page⁶ on LeanPub.com, and you’re also welcome to contact
us on Twitter @concentratedDon and @JeffHicks. You can also post in the Q&A forums on
PowerShell.org, which frankly is a lot easier to respond to than Twitter. If you purchased the “Forever
Edition” of this book on LeanPub, then you’ll see us incorporating suggestions and releasing a new
build of the book all the time. If you obtained the book elsewhere, we can’t turn your purchase
into a LeanPub account for you. However, when the book changes enough for us to publish a new
“edition” to other booksellers, that might be a time to pick it up on LeanPub instead, provided you
understand the “Agile publishing” model and are comfortable with it.
⁶http://leanpub.com/powershell-scripting-toolmaking
Part 1: Review PowerShell Toolmaking
This first Part of the book is essentially a light-speed refresher of what PowerShell Scripting in a
Month of Lunches covers. If you’ve read that book, or feel you have equivalent experience, then this
short Part will help refresh you on some core terminology and techniques. If you haven’t… well, we
really recommend you get that fundamental information under your belt first.
Functions, the Right Way
This chapter is essentially meant to be a warp-speed review of the material we presented in the core
narrative of Learn PowerShell Toolmaking in a Month of Lunches (and its successor, PowerShell
Scripting in a Month of Lunches). This material is, for us, “fundamental” in nature, meaning it
remains essentially unchanged from version to version of PowerShell. Consider this chapter a kind
of “pre-req check;” if you can blast through this, nodding all the while and going, “yup,” then you’re
good to skip to the next Part of this book. If you run across something where you're like, "wait, what?"
then a review of those foundational, prerequisite books might be in order, along with a thorough
reading of this Part of this book.
By the way, you’ll notice that our downloadable code samples for this book (the “PSTool-
making” module in PowerShell Gallery) contain the same code samples as the core “Part 2”
narrative from PowerShell Scripting in a Month of Lunches. Those code samples also align
to this book, and we use them in this chapter as illustrations.
Tool Design
We strongly advocate that you always begin building your tools by first designing them. What
inputs will they require? What logical decisions will they have to make? What information will
they output? What inputs might they consume from other tools, and how might their output be
consumed? We try to answer all of these questions - often in writing! - up front. Doing so helps us
think through the ways in which our tool will be used, by different people at different times, and to
make good decisions about how to build the tool when it comes time to code.
I need a PowerShell script that will check a complete DFS Root, and report all targets and
access based enumeration for each. I then need the script to check all NTFS permissions
on all the targets and list the security groups assigned. I then need this script to search 4
domains and report on the users in these groups.
And yup - that’s what “Start with a Command” means. We’d probably start by planning that out -
inputs are clearly some kind of DFS root name or server name, and an output path for the reports
to be written. Then the discovery process would begin: how can PowerShell connect to a DFS root?
How can it enumerate targets? How can it resolve the target physical location and query NTFS
permissions? Good ol’ Google, and past experience, would be our main tool here, and we wouldn’t
go an inch further until we had a text file full of answers, sample commands, and notes.
If you’re really reading that DFS example, we’d probably stop our function at the point
where it gets the permissions on the DFS targets. The results of that operation could be used
to unwind the users who were in the resulting groups - a procedure we’d write as a separate
tool, in all likelihood.
Comment-Based Help
We’d definitely “dress up” our code using comment-based help, if not full help (we cover that later
in the book). We’d make sure to provide usage examples, documentation for each parameter, and a
pretty detailed description about what the tool did.
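As a hedged sketch - the function name and parameters here are illustrative, not the book's sample - comment-based help looks like this:

function Get-DfsTargetReport {
    <#
    .SYNOPSIS
    Reports the targets of a DFS root and their NTFS permissions.
    .DESCRIPTION
    Connects to the specified DFS root, enumerates each target, and
    outputs one object per target listing the assigned security groups.
    .PARAMETER DfsRoot
    The DFS root to inspect, for example \\company.pri\shares.
    .EXAMPLE
    Get-DfsTargetReport -DfsRoot \\company.pri\shares
    Reports on every target under the specified root.
    #>
    [CmdletBinding()]
    param(
        [Parameter(Mandatory)]
        [string]$DfsRoot
    )
    # tool logic goes here
}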
Handling Errors
Finally, and again if we hadn’t habitually done so already, we’d anticipate errors and try to handle
them gracefully. “Permission Denied” querying permissions on a file? Handled - perhaps outputting
an object, for that file, indicating the error.
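A minimal sketch of that pattern (the $file variable here is assumed to come from an enclosing loop):

Try {
    $acl = Get-Acl -Path $file.FullName -ErrorAction Stop
    # ...work with $acl normally...
}
Catch {
    # emit an object for the failed file instead of halting the whole run
    [pscustomobject]@{
        Path  = $file.FullName
        Error = $_.Exception.Message
    }
}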
Here’s a tip: Read the transcript first. As you go, make notes about the things you see, and
what you’ll need to do in order to duplicate those things. Then, start coding, checking off
each thing you noted as you incorporate it into your code. The transcript file is too wide to
properly fit the page of a printed book so the slashes you see at the end of lines are really
line continuation characters.
The Transcript
Here you go:
**********************
Windows PowerShell transcript start
Start time: 20170623144152
Username: DESKTOP-7NKT52T\User
RunAs User: DESKTOP-7NKT52T\User
Machine: DESKTOP-7NKT52T (Microsoft Windows NT 10.0.14393.0)
Host Application: C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe
Process ID: 1412
PSVersion: 5.1.14393.1358
PSEdition: Desktop
PSCompatibleVersions: 1.0, 2.0, 3.0, 4.0, 5.0, 5.1.14393.1358
BuildVersion: 10.0.14393.1358
CLRVersion: 4.0.30319.42000
WSManStackVersion: 3.0
PSRemotingProtocolVersion: 2.3
SerializationVersion: 1.1.0.1
**********************
Supply an argument that is in the set and then try the command again.
At line:1 char:52
+ Get-XXSystemInfo -Computername localhost -Protocol x
+ ~
+ CategoryInfo : InvalidData: (:) [Get-XXSystemInfo], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationError,Get-XXSystemInfo
PS C:\> Get-XXSystemInfo -Computername nope -verbose -Protocol dcom -TryOtherProtocol
VERBOSE: Attempting nope on dcom
PS C:\> TerminatingError(New-CimSession): "The running command stopped because the
preference variable "ErrorActionPreference" or common parameter is set to Stop:
The RPC server is unavailable. "
WARNING: Skipping nope due to failure to connect
VERBOSE: Attempting nope on wsman
PS C:\> TerminatingError(New-CimSession): "The running command stopped because the
preference variable "ErrorActionPreference" or common parameter is set to Stop:
The RPC server is unavailable. "
WARNING: Skipping nope due to failure to connect
PS C:\> Stop-Transcript
**********************
Windows PowerShell transcript end
End time: 20170623144314
**********************
Our Read-Through
Let’s go through that transcript, and we’ll tell you what should have been coming to mind for you
at each step.
OK, this tells us that the command name is Get-XXSystemInfo, and it has a -Computername
parameter. We don’t know if it accepts just one value, or many, at this point. We can see what
it produces, so we know we’re going to have to query two CIM/WMI classes. We don’t know what
the module name is, but we could make one up if we needed to.
The above tells us that [CmdletBinding()] is in use, and that Write-Verbose is used.
We now know that there are multiple protocols. Based on the verbose output above, at least Wsman
and Dcom are supported. We can anticipate adding a ValidateSet() to only allow those two values,
unless we encounter some more.
The foregoing suggests that we have the ability to recursively call our own function to try the other
protocol. We’ll need to build that into the error-handling routine.
Our Answer
As noted earlier, our code is in the downloadable samples, but here’s a print version for your
convenience:
Answer.ps1
Function Get-XXSystemInfo {
    [CmdletBinding()]
    param(
        [Parameter(
            Mandatory,
            ValueFromPipeline
        )]
        [string[]]$Computername,

        [Parameter()]
        [ValidateSet('Dcom', 'Wsman')]
        [string]$Protocol = 'Wsman',

        [Parameter()]
        [switch]$TryOtherProtocol
    )
    BEGIN {
        If ($Protocol -eq 'Dcom') {
            $cso = New-CimSessionOption -Protocol Dcom
        }
        else {
            $cso = New-CimSessionOption -Protocol Wsman
        }
    }
    PROCESS {
        ForEach ($comp in $computername) {
            Try {
                Write-Verbose "Attempting $comp on $protocol"
                $s = New-CimSession -ComputerName $comp -SessionOption $cso -EA Stop
                # the CIM queries and the output object are elided in this sample
                $s | Remove-CimSession
            } #Try
            Catch {
                Write-Warning "Skipping $comp due to failure to connect"
                if ($TryOtherProtocol) {
                    If ($Protocol -eq 'Dcom') {
                        Get-XXSystemInfo -Protocol Wsman -Computername $comp
                    }
                    else {
                        Get-XXSystemInfo -Protocol Dcom -Computername $comp
                    }
                }
            } #Catch
        } #ForEach
    } #PROCESS
    END {}
}
How’d You Do
If you were able to spot all of the major elements, and construct something at least vaguely like
our solution, then we think you’re probably “good to go” in terms of this book. If not, check out
PowerShell Scripting in a Month of Lunches from Manning.com, and thoroughly re-read Part 1 of
this book, to bring yourself up to speed.
We can’t stress that enough: if you’re not up to speed at this point, then you’re not ready to proceed
further in this book.
Part 2: Professional-Grade Toolmaking
In this Part, we’re going to try and take your toolmaking skills a bit further. This is the stuff that sets
the beginners apart from the real pros. We’ve constructed these chapters into a kind of storyline, so
each one builds on what the previous ones taught. That said, the storyline here isn’t tightly coupled,
so feel free to dive into whatever chapter seems of most interest or use to you. Because we're
moving into Toolmaking areas that are more optional and as-you-need, you won’t see “Your Turn”
lab elements in every chapter - but that doesn’t mean you shouldn’t try and play along! Just follow
along with your own code. However, when we include a “Your Turn” section, we obviously strongly
suggest you follow along with that “lab.”
Publishing Your Tools
Inevitably, you’ll come to point where you’re ready to share your tools. Hopefully, you’ve put those
into a PowerShell module, as we’ve been advocating throughout this book, because in most cases
it’s a module that you’ll share.
#
# Module manifest for module 'PowerShell-Toolmaking'
#
# Generated by: Don Jones & Jeffery Hicks
#
@{

# Supported PSEditions
CompatiblePSEditions = @("Desktop")

# Minimum version of Microsoft .NET Framework required by this module. This prerequisite is valid for the PowerShell Desktop edition only.
# DotNetFrameworkVersion = ''

# Minimum version of the common language runtime (CLR) required by this module. This prerequisite is valid for the PowerShell Desktop edition only.
# CLRVersion = ''

# Modules that must be imported into the global environment prior to importing this module
# RequiredModules = @()

# Script files (.ps1) that are run in the caller's environment prior to importing this module.
# ScriptsToProcess = @()

# Functions to export from this module, for best performance, do not use wildcards and do not delete the entry, use an empty array if there are no functions to export.
FunctionsToExport = '*'

# Cmdlets to export from this module, for best performance, do not use wildcards and do not delete the entry, use an empty array if there are no cmdlets to export.
CmdletsToExport = '*'

# Aliases to export from this module, for best performance, do not use wildcards and do not delete the entry, use an empty array if there are no aliases to export.
AliasesToExport = '*'

PSData = @{

# Tags applied to this module. These help with module discovery in online galleries.
# Tags = @()

# Default prefix for commands exported from this module. Override the default prefix using Import-Module -Prefix.
# DefaultCommandPrefix = ''
A lot of this is commented out, which is the default when you use New-ModuleManifest. The specifics
you must provide will differ based on your repository’s requirements, but in general we recommend
at least the following be completed:
• RootModule. This is actually mandatory for the .psd1 to work, and it should point to the “main”
.psm1 file of your module.
• ModuleVersion. This is generally mandatory, too, and is at the very least a very good idea.
• GUID. This is mandatory, and generated automatically by New-ModuleManifest.
• Author.
• Description.
Take note of your author name and try to be consistent. You want to make it easy for people
to find the other amazing tools you have published.
These are, incidentally, the minimums for publishing to the PowerShell Gallery. We also recommend, in
the strongest possible terms, that you specify the FunctionsToExport array, as well as VariablesToExport,
CmdletsToExport, and AliasesToExport if those are applicable. Ours, as you'll see above, are
set to *, which is a bad idea. In our specific example here, it makes sense, because our root module is
actually empty - we aren’t exporting anything; the module is just a kind of container for our sample
code to live in. But in your case, the recommended best practice is to explicitly list function, alias
and variable (without the $ sign) names, which achieves two benefits:
• Auto-discovery of your commands will be faster, since PowerShell can just read the .psd1 rather
than parsing the entire .psm1.
• Some repositories may be able to provide per-command search capabilities if you specify which
commands your module offers.
You may be prompted for additional information if it can’t be found in your module manifest.
Be aware that publishing a module will include all files and folders in your module location.
Hidden files and folders should be ignored but make sure you have cleaned up any scratch,
test or working files.
You’ll likely receive a confirmation email from the Gallery, which may include a number of
PSScriptAnalyzer notifications. As we describe in the chapter on Analyzing Your Script, the Gallery
automatically runs a number of PSScriptAnalyzer best practices rules on all submitted code, and
you should try hard to conform to these unless you have a specific reason not to.
So what’s appropriate for PowerShell Gallery publication?
• Production-ready code. Don’t submit untested, pre-release code unless you’re doing so as part
of a public beta test, and be sure to clearly indicate that the code isn’t production-ready (for
example, using Write-Warning to display a message when the module is loaded).
• Open-source code. Gallery code is, by implication, open-source; you should consider hosting
your development code in a public OSS repository like GitHub, and only publish “released”
code to the Gallery. Be sure not to include any proprietary information.
• Useful code. There are like thirty seven million 7Zip modules in the Gallery. More are likely
not needed. Try to publish code that provides unique value.
Items can be removed from the Gallery if you change your mind, but Microsoft doesn't have the ability
to go out and delete whatever people may have already downloaded. Bear that in mind before
contributing.
Don’t put any other files in this folder other than what you publish otherwise you will get
errors when using Find-Module.
We set the repository to be trusted because we know what is going in and we don’t want to be
bothered later when we try to install from it. If you forget, you can modify the repository later:
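For example, a minimal sketch, assuming the repository was registered under the hypothetical name MyRepo:

Set-PSRepository -Name MyRepo -InstallationPolicy Trusted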
This local repository can be used just like the PowerShell gallery.
We set this up locally as a proof of concept. It shouldn't take that much more work to set up a
repository on a company file share. Just mind your permissions.
Your Turn
We aren’t going to offer a real hands-on lab in this chapter, mainly because we think it’s a bad idea
to use a public repo like PowerShell Gallery as a “lab environment!” It’s also non-trivial to set up
your own private repository, and if you go through that trouble, we think you’ll want it to be in
production, not in a lab, so that you can benefit from that work.
That said, we do want to encourage you to sign into the PowerShell Gallery and create your API
key, as we’ve described doing in this chapter. It’s a first step toward getting ready to publish your
own code.
Let’s Review
We aren’t going to ask you publish anything to the gallery. You may never have a need to publish
or share your work. But let’s see if you picked up anything in this chapter.
Review Answers
Hopefully you came up with answers like this:
1. NuGet
2. A module manifest.
3. Any unique project that offers value and is production ready. You can publish a project that
is in beta or under development, but that should be made clear to any potential consumer,
such as through version numbering.
Part 3: Controller Scripts and Delegated Administration
With your tools constructed and tested, it’s time to put them to work - and that means writing
controller scripts. Not sure what that means? Keep reading. We’ll look at several kinds, from simple
to complex, and look at a couple of other, unique ways in which you can put your tools to work.
Proxy Functions
In PowerShell, a proxy function is a specific kind of wrapper function. That is, it “wraps” around an
existing command, usually with one of these intents:
• Removing functionality
• Hard-coding functionality and removing access to it
• Adding functionality
In some cases, a proxy command is meant to “replace” an existing command. This is done by giving
the proxy the same name as the command it wraps; since the proxy gets loaded into the shell last,
it’s the one that actually gets run when you run the command name.
There’s a way, using a fully-qualified command name, to regain access to the wrapped
command, so proxy functions shouldn’t be seen as security mechanism. They’re more of
a functional convenience.
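For example, even with a proxy named ConvertTo-Html loaded, the module-qualified name still runs the original cmdlet:

Get-Process | Microsoft.PowerShell.Utility\ConvertTo-Html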
For Example
You’re probably familiar with PowerShell’s ConvertTo-HTML command. We’d like to make a version
that “replaces” the existing command, providing full access to it but always injecting a particular
CSS style sheet, so that the resulting HTML can be a bit prettier.
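The command that generates the proxy base is elided in this sample; the standard approach looks like this:

$cmd = Get-Command ConvertTo-Html
$metadata = New-Object System.Management.Automation.CommandMetadata $cmd
[System.Management.Automation.ProxyCommand]::Create($metadata)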
Here’s the rather-lengthy result (once again, apologies for the backslashes, which represent line-
wrapping; it’s unavoidable in this instance, but the downloadable sample code won’t show them):
[CmdletBinding(DefaultParameterSetName='Page',
HelpUri='http://go.microsoft.com/fwlink/?LinkID=113290',
RemotingCapability='None')]
param(
[Parameter(ValueFromPipeline=$true)]
[psobject]
${InputObject},
[Parameter(Position=0)]
[System.Object[]]
${Property},
[Parameter(ParameterSetName='Page', Position=3)]
[string[]]
${Body},
[Parameter(ParameterSetName='Page', Position=1)]
[string[]]
${Head},
[Parameter(ParameterSetName='Page', Position=2)]
[ValidateNotNullOrEmpty()]
[string]
${Title},
[ValidateNotNullOrEmpty()]
[ValidateSet('Table','List')]
[string]
${As},
[Parameter(ParameterSetName='Page')]
[Alias('cu','uri')]
[ValidateNotNullOrEmpty()]
[uri]
${CssUri},
[Parameter(ParameterSetName='Fragment')]
[ValidateNotNullOrEmpty()]
[switch]
${Fragment},
[ValidateNotNullOrEmpty()]
[string[]]
${PostContent},
[ValidateNotNullOrEmpty()]
[string[]]
${PreContent})
begin
{
try {
$outBuffer = $null
if ($PSBoundParameters.TryGetValue('OutBuffer', [ref]$outBuffer))
{
$PSBoundParameters['OutBuffer'] = 1
}
$wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('Microsoft.PowerShell.Utility\ConvertTo-Html',
    [System.Management.Automation.CommandTypes]::Cmdlet)
$scriptCmd = {& $wrappedCmd @PSBoundParameters }
$steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)
$steppablePipeline.Begin($PSCmdlet)
} catch {
throw
}
}
process
{
try {
$steppablePipeline.Process($_)
} catch {
throw
}
}
end
{
try {
$steppablePipeline.End()
} catch {
throw
}
}
<#
.ForwardHelpTargetName Microsoft.PowerShell.Utility\ConvertTo-Html
.ForwardHelpCategory Cmdlet
#>
This isn’t wrapped in a function, so that’s the first thing we’ll do in the next step (which we’ll put
into a file in Step2, so you can differentiate).
function NewConvertTo-HTML {
[CmdletBinding(DefaultParameterSetName='Page',
HelpUri='http://go.microsoft.com/fwlink/?LinkID=113290',
RemotingCapability='None')]
param(
[Parameter(ValueFromPipeline=$true)]
[psobject]
${InputObject},
[Parameter(Position=0)]
[System.Object[]]
${Property},
[Parameter(ParameterSetName='Page', Position=3)]
[string[]]
${Body},
[Parameter(ParameterSetName='Page', Position=1)]
[string[]]
${Head},
[Parameter(ParameterSetName='Page', Position=2)]
[ValidateNotNullOrEmpty()]
[string]
${Title},
[ValidateNotNullOrEmpty()]
[ValidateSet('Table','List')]
[string]
${As},
[Parameter(ParameterSetName='Page')]
[Alias('cu','uri')]
[ValidateNotNullOrEmpty()]
[uri]
${CssUri},
[Parameter(ParameterSetName='Fragment')]
[ValidateNotNullOrEmpty()]
[switch]
${Fragment},
[ValidateNotNullOrEmpty()]
[string[]]
${PostContent},
[ValidateNotNullOrEmpty()]
[string[]]
${PreContent})
begin
{
try {
$outBuffer = $null
if ($PSBoundParameters.TryGetValue('OutBuffer', [ref]$outBuffer))
{
$PSBoundParameters['OutBuffer'] = 1
}
$wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('Microsoft.PowerShell.Utility\ConvertTo-Html',
    [System.Management.Automation.CommandTypes]::Cmdlet)

# The middle of this begin block is elided in this sample. As described
# below, the modification defines a CSS here-string (its content is
# omitted here) and injects it into -Head; a reconstruction of that logic:
#create our css
$css = @'
<style>
/* style sheet content elided */
</style>
'@
if ($PSBoundParameters.ContainsKey('Head')) {
    $PSBoundParameters['Head'] += $css
}
else {
    $PSBoundParameters['Head'] = $css
}
$scriptCmd = {& $wrappedCmd @PSBoundParameters }
$steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)
$steppablePipeline.Begin($PSCmdlet)
} catch {
    throw
}
}
process
{
try {
$steppablePipeline.Process($_)
} catch {
throw
}
}
end
{
try {
$steppablePipeline.End()
} catch {
throw
}
}
<#
.ForwardHelpTargetName Microsoft.PowerShell.Utility\ConvertTo-Html
.ForwardHelpCategory Cmdlet
#>
}
Our changes begin at around line 63, with the #create our css comment. Under that, we check to
see if -head had been specified; if it was, we append our CSS to it. If not, we add a “head” parameter
to $PSBoundParameters. Then we let the proxy function continue just as normal.
You may want to clean up references to the original version by deleting the HelpUri link in
cmdletbinding as well as the forwarded help link at the end. Or if you have created your
own help documentation you can delete the forward links altogether.
Adding a Parameter
Adding a parameter is as easy as declaring it in your proxy function’s Param() block. Add whatever
attributes you like, and you’re good to go. You just want to remove the added parameter from
$PSBoundParameters before the underlying command executes, since that command won’t know
what to do with your new parameter.
$PSBoundParameters.Remove('MyNewParam')
$scriptCmd = {& $wrappedCmd @PSBoundParameters }
Just remove it before that $scriptCmd line, and you’re good to go.
Removing a Parameter
This is even easier - just delete the parameter from the Param() block! If you’re removing a parameter
that’s mandatory, you’ll need to internally provide a value with it. For example:
$PSBoundParameters += @{'RemovedParam'=$MyValue}
$scriptCmd = {& $wrappedCmd @PSBoundParameters }
This will re-connect the -RemovedParam parameter, feeding it whatever’s in $MyValue, before
running the underlying command.
Your Turn
Now it’s your turn to create a proxy function.
Start Here
In this exercise, you’ll be extending the Export-CSV command. However, you’re not going to
“overwrite” the existing command. Instead, you’ll be creating a new command that uses Export-CSV
under the hood.
Your Task
Create a proxy function named Export-TDF. This should be a wrapper around Export-CSV, and
should not include a -Delimiter parameter. Instead, it should hard-code the delimiter to be a tab.
Hint: you can specify a tab by putting a backtick, followed by the letter “t,” inside double quotes.
Our Take
Here’s what we came up with - also in the lab-results folder in the downloadable code.
function Export-TDF {
[CmdletBinding(DefaultParameterSetName='Delimiter',
SupportsShouldProcess=$true,
ConfirmImpact='Medium',
HelpUri='http://go.microsoft.com/fwlink/?LinkID=113299')]
param(
[Parameter(
Mandatory=$true,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true
)]
[psobject]$InputObject,
[Parameter(Position=0)]
[ValidateNotNullOrEmpty()]
[string]$Path,
[Alias('PSPath')]
[ValidateNotNullOrEmpty()]
[string]$LiteralPath,
[switch]$Force,
[Alias('NoOverwrite')]
[switch]$NoClobber,
[ValidateSet('Unicode','UTF7','UTF8','ASCII','UTF32',
'BigEndianUnicode','Default','OEM')]
[string]$Encoding,
[switch]$Append,
[Parameter(ParameterSetName='UseCulture')]
[switch]$UseCulture,
[Alias('NTI')]
[switch]$NoTypeInformation
)
begin
{
try {
$outBuffer = $null
if ($PSBoundParameters.TryGetValue('OutBuffer', [ref]$outBuffer))
{
$PSBoundParameters['OutBuffer'] = 1
}
$wrappedCmd = $ExecutionContext.InvokeCommand.GetCommand('Microsoft.PowerShell.Utility\Export-Csv',
    [System.Management.Automation.CommandTypes]::Cmdlet)
$PSBoundParameters += @{'Delimiter'="`t"}
$scriptCmd = {& $wrappedCmd @PSBoundParameters }
$steppablePipeline = $scriptCmd.GetSteppablePipeline($myInvocation.CommandOrigin)
$steppablePipeline.Begin($PSCmdlet)
} catch {
throw
}
}
process
{
try {
$steppablePipeline.Process($_)
} catch {
throw
}
}
end
{
try {
$steppablePipeline.End()
} catch {
throw
}
}
} #close function
We really just removed one parameter definition and added one line of code to hard-code the
delimiter. We removed the {} around the parameter names and lined things up in the Param() block
the way we would normally write code. We also removed the forwarded help links. We would still
need to create new comment-based help for this command, probably copying a lot of the help from
the original command.
Once you understand the concepts, you can use Jeff’s Copy-Command⁷ function from the
PSScriptTools module.
Let’s Review
See if you can answer a couple of questions on proxy functions:
Review Answers
Here are our answers:
⁷https://github.com/jdhitsolutions/PSScriptTools/blob/master/docs/Copy-Command.md
Working with SQL Server Data
• SQL Server is easy to use from PowerShell code. Literally a handful of lines, and you’re done.
• SQL Server Reporting Services (also free in the Express edition) can turn SQL Server data into
gorgeous reports with charts and graphs - and can automate the production and delivery of
those reports with zero effort from you.
• SQL Server is something that many computers can connect to at once, meaning you can write
scripts that run on servers, letting those servers update their own data in SQL Server. This is
faster than a script which reaches out to query many servers in order to update a spreadsheet.
We don’t know how to better evangelize using SQL Server for data storage over Microsoft Excel.
• SQL Server is a service that runs on a server. Part of what you’ll need to know, to use it, is the
server’s name. A single machine (physical or VM) can run multiple instances of SQL Server,
so if you need to connect to an instance other than the default, you’ll need the instance name
also. The naming pattern is SERVER\INSTANCE.
• A SQL Server instance can host one or more databases. You will usually want a database
for each major data storage purpose. For example, you might ask a DBA to create an
“Administration Data” database on one of your SQL Server computers, giving you a place
to store stuff.
• Databases have a recovery mode option. Without getting into a lot of details, you can use the
“Simple” recovery mode (configurable in SQL Server Management Studio by right-clicking the
database and accessing its Options page) if your data isn’t irreplaceable and you don’t want to
mess with maintaining the database’s log files. For anything more complex, either take a DBA
to lunch, or read Don’s Learn SQL Server Administration in a Month of Lunches.
• Databases contain tables, each of which is analogous to an Excel worksheet.
• Tables consist of rows (entities) and columns (fields), which correspond to the rows and columns
of an Excel sheet.
• Columns have a data type, which determines the kind of data they can store, like text
(NVARCHAR), dates (DATETIME), or numbers (INT). The data type also determines the data
ranges. For example, NVARCHAR(10) can hold 10 characters; NVARCHAR(MAX) has no limit.
INT can store smaller values than BIGINT, and bigger values than TINYINT.
• SQL Server defaults to Windows Authentication mode, which means the domain user account
running your scripts must have permission to connect to the server (a login), and permission
to use your database (a database user). This is the safest means of authentication as it doesn’t
require passwords to be kept in your script. If running a script as a scheduled task, the task can
be set to “run as” a domain user with the necessary permissions.
Even if you are the only person who will ever interact with stored data, you are still better
off installing SQL Server Express (did we mention it is free?) instead of relying on Excel.
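The connection-opening code is elided from this sample, but a minimal sketch using the .NET SqlClient classes (the server, instance, and database names are placeholders) looks like this:

$connectionString = 'Server=SQLSERVER\SQLEXPRESS;Database=AdminData;Trusted_Connection=True;'
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = $connectionString
$conn.Open()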
You can leave the connection open for your entire script; be sure to run $conn.Close() when you’re
done, though. It’s not a tragedy to not close the connection; when your script ends, the connection
object will vanish, and SQL Server will automatically close the connection a bit later. But if you’re
using a server that’s pretty busy, the DBA is going to get in your face about leaving the connection
⁸http://connectionstrings.com
Working with SQL Server Data 36
open. And, if you run your script multiple times in a short period of time, you’ll create a new
connection each time rather than re-using the same one. The DBAs will definitely notice this and
get agitated.
You do not need to have any SQL Server software installed locally for these steps as they
are relying on out-of-the-box bits from the .NET Framework. And even if you are working
with a local SQL installation, you should still follow SQL Server best practices.
Writing a Query
The next thing you need to do is retrieve, insert, update, or remove some data. This is done by writing
queries in the Transact-SQL (T-SQL) language, which corresponds with the ANSI SQL standard,
meaning most queries look basically the same on most database servers. There’s a great free online
SQL tutorial⁹ if you need one, but we’ll get you started with the basics.
To do this, you’ll need to know the table and column names from your database. SQL Server
Management Studio is a good way to discover these.
For the following sections, we’re going to focus on query syntax, and then give you an example of
how we might build that query in PowerShell. Once your query is in a variable, it’s easy enough
to run it - and we’ll cover how to do that in a bit. Also, we’re not going to be providing exhaustive
coverage of SQL syntax; we’re covering the basics. There are plenty of resources, including the
aforementioned online tutorial, if you need to dig deeper.
Adding Data
Adding data is done by using an INSERT query. The basic syntax looks like this:
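INSERT INTO <tablename> (<column>,<column>)
VALUES(<value>,<value>)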
So you’ll need to know the name of the table you’re adding data to, and you’ll need to know the
column names. You also need to know a bit about how the table was defined. For example, if a
“Name” column is marked as mandatory (or “NOT NULL”) in the table design, then you must list
that column and provide a value for it. Sometimes, a table may define a default value for a column,
in which case you can leave the column out if you’re okay with that default value. Similarly, a table
can permit a given column to be empty (NULL), and you can omit that column from your list if you
don’t want to provide a value.
⁹http://www.w3schools.com/sql/
Whatever order you list the columns in, your values must be in the same order. You’re not forced to
use the column order that the table defines; you can list them in any order.
Numeric values aren’t delimited in T-SQL. String values are delimited in single quotes; any single
quotes within a string value (like "O'Leary") must be doubled ("O''Leary") or your query will fail.
Dates are treated as strings, and are delimited with single quotes.
It’s dangerous to build queries from user-entered data. Doing so opens your code to a kind
of attack called SQL Injection. We’re assuming that you plan to retrieve things like system
data, which shouldn’t be nefarious, rather than accepting input from users. The safer way
to deal with user-entered data is to create a stored procedure to enter the data, but that’s
well beyond the scope of this book.
$ComputerName = "SERVER2"
$OSVersion = "Win2012R2"
$query = "INSERT INTO OSVersion (ComputerName,OS) VALUES('$ComputerName','$OSVersion')"
This assumes a table named OSVersion, with columns named ComputerName and OS. Notice that
we’ve put the entire query into double quotes, allowing us to just drop variables into the VALUES list.
We always put our query in a variable, because that makes it easy to output the query text
by using Write-Verbose. That’s a great way to debug queries that aren’t working, since you
get to see the actual query text with all the variables “filled-in.”
Removing Data
A DELETE query is used to delete rows from a table, and it is almost always accompanied by a WHERE
clause so that you don’t delete all the rows. Be really careful, as there’s no such thing as an “UNDO”
query!
So, suppose we’re getting ready to insert a new row into our table, which will list the OS version of
a given computer. We don’t know if that computer is already listed in the table, so we’re going to
just delete any existing rows before adding our new one. Our DELETE query might look like this:
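The query itself is elided in this sample; matching the OSVersion table from the INSERT example, it would look something like:

$query = "DELETE FROM OSVersion WHERE ComputerName = '$ComputerName'"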
There’s no error generated if you attempt to delete rows that don’t exist.
Changing Data
An UPDATE query is used to change an existing row, and is accompanied by a SET clause with the
changes, and a WHERE clause to identify the rows you want to change.
UPDATE <tablename>
SET <column> = <value>, <column> = <value>
WHERE <criteria>
For example:
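The example is elided in this sample; based on the description that follows, it was something like this (the table and column names are illustrative):

$query = "UPDATE DiskInfo
SET FreeSpace = $freespace
WHERE ComputerName = '$ComputerName'"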
We’d ordinarily do that all on one line; we’ve broken it up here just to make it fit more easily in the
book. This assumes that $freespace contains a numeric figure, and that $ComputerName contains a
computer name.
Retrieving Data
Finally, the big daddy of queries, the SELECT query. This is the only one that returns data (although
the other three will return the number of rows they affected). This is also the most complex query
in the language, so we’re really only tackling the basics.
SELECT <column>,<column>
FROM <tablename>
WHERE <criteria>
ORDER BY <column>
The WHERE and ORDER BY clauses are optional, and we’ll come to them in a moment.
Beginning with the core SELECT, you follow with a list of columns you want to retrieve. While the
language permits you to use * to return all columns, this is a poor practice. For one, it performs
slower than a column list. For another, it makes your code harder to read. So stick with listing the
columns you want.
The FROM clause lists the table name. This can get a ton more complex if you start doing multi-table
joins, but we’re not getting into that in this book.
A WHERE clause can be used to limit the number of rows returned, and an ORDER BY clause can be used
to sort the results on a given column. Sorting is ascending by default, or you can specify descending.
For example:
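The example is elided in this sample; one in the spirit of this chapter's data (the table name is illustrative) would be:

$query = "SELECT ComputerName,DiskSpace,DateTaken
FROM DiskInfo
WHERE DiskSpace < 100
ORDER BY DiskSpace DESC"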
You list each column name, and for each, provide a datatype. In SQL Server, you’ll commonly use:
You want to use the smallest data type possible to store the data you anticipate putting into the table,
because oversized columns can cause a lot of wasted disk space.
Running a Query
You’ve got two potential types of queries: ones that return data (SELECT) and ones that don’t (pretty
much everything else). Running them starts the same:
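The command setup is elided in this sample; the standard ADO.NET pattern is:

$command = $conn.CreateCommand()
$command.CommandText = $query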
This assumes $conn is an open connection object, and that $query has your T-SQL query. How you
run the command depends on your query. For queries that don’t return results:
$command.ExecuteNonQuery()
That can produce a return object, which you can pipe to Out-Null if you don’t want to see it. For
queries that produce results:
$reader = $command.ExecuteReader()
This generates a DataReader object, which gives you access to your queried data. The trick with
these is that they’re forward-only, meaning you can read a row, and then move on to the next row
- but you can’t go back to read a previous row. Think of it as an Excel spreadsheet, in a way. Your
cursor starts on the first row of data, and you can see all the columns. When you press the down
arrow, your cursor moves down a row, and you can only see that row. You can’t ever press up arrow,
though - you can only keep going down the rows.
You’ll usually read through the rows using a While loop:
while ($reader.read()) {
    #do something with the data
}
The Read() method will advance to the next row (you actually start “above” the first row, so
executing Read() the first time doesn't "skip" any data), and returns True if there was another row to read.
To retrieve a column, inside the While loop, you run GetValue(), and provide the column ordinal
number of the column you want. This is why it’s such a good idea to explicitly list your columns
in your SELECT query; you’ll know which column is in what position. The first column you listed in
your query will be 0, the one after that 1, and so on.
So here’s a full-fledged example:
while ($reader.read()) {
    [pscustomobject]@{
        'ComputerName' = $reader.GetValue(0)
        'DiskSpace'    = $reader.GetValue(1)
        'DateTaken'    = $reader.GetValue(2)
    }
}
$conn.Close()
This snippet will produce objects, one object for each row in the table, and with each object having
three properties that correspond to three of the table columns.
If by chance you don’t remember your column positions, you can use something like this to auto-
discover the column number.
while ($reader.read()) {
    [pscustomobject]@{
        'ComputerName' = $reader.GetValue($reader.GetOrdinal("computername"))
        'DiskSpace'    = $reader.GetValue($reader.GetOrdinal("diskspace"))
        'DateTaken'    = $reader.GetValue($reader.GetOrdinal("datetaken"))
    }
}
Regardless of the approach, we'd usually wrap this in a Get- function, so that we could just run the
function and get objects as output - or in a corresponding Set-, Update-, or Remove- function,
depending on your SQL query.
Invoke-Sqlcmd
If you by chance have installed a local instance of SQL Server Express, you will also have a set of
SQL-related PowerShell commands and a SQLSERVER PSDrive. We aren’t going to cover them as
this isn’t a SQL Server book. But you will want to take advantage of Invoke-Sqlcmd.
Instead of dealing with the .NET Framework to create a connection, command and query, you can
simply invoke the query.
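For example (the instance and database names are placeholders):

Invoke-Sqlcmd -Query $query -ServerInstance 'SQLSERVER\SQLEXPRESS' -Database 'AdminData'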
You can use any of the query types we’ve shown you in this chapter. One potential downside to this
approach in your toolmaking is that obviously this will only work locally, or where the SQL Server
modules have been installed. And there is a bit of a lag while the module is initially loaded.
Let’s Review
Because we don’t want to assume that you have access to a SQL Server computer, we aren’t going
to present a hands-on experience in this chapter. However, we do encourage you to try and answer
these questions:
Review Answers
Here are our answers:
¹⁰https://www.gitbook.com/book/devopscollective/ditch-excel-making-historical-trend-reports-in-po
Part 5: Seriously Advanced Toolmaking
In this Part of the book, we’ll dive into some deep, “extra” topics. These are all things we’re pretty
sure you should know, but that you might not use right away, especially if you are an apprentice
toolmaker. This Part isn’t constructed in a storyline, so you can just pick and choose the bits you
think you’ll need or you find interesting.
Measuring Tool Performance
We PowerShell geeks will often get into late-night, at-the-pub arguments about which bits of Pow-
erShell perform best under certain circumstances. You’ll hear arguments like, “the ForEach-Object
cmdlet is slower because its script block has to be parsed each time” or, “storing all those objects in
a variable will make everything take longer because of how arrays are managed.” At the end of the
day, if performance is important to you, this is the chapter for you.
Is Performance Important?
Well, maybe. Why is performance important to you? Look, if you've written a command that will
have to reboot a dozen computers, then we could split hairs all night about which way
is faster or slower - it won't matter. But if you're writing code that needs to manipulate thousands
of objects, or tens of thousands or more, then a minute performance gain per-object will add up
quickly. The point is, before you sweat this stuff, know that tweaking PowerShell for millisecond
performance gains isn’t useful unless there are a lot of milliseconds to be saved.
Test.ps1
This basically does the same thing in different ways. Let’s run that to see what happens:
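The script itself is elided from this sample, but the pattern is three timed rounds over the same work - a hypothetical sketch:

Write-Host 'Round 1'
Measure-Command { 1..50000 | ForEach-Object { $_ * 2 } }

Write-Host 'Round 2'
Measure-Command { foreach ($i in 1..50000) { $i * 2 } }

Write-Host 'Round 3'
Measure-Command { (1..50000).ForEach({ $_ * 2 }) }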
Round 1
Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 148
Ticks : 1486572
TotalDays : 1.72056944444444E-06
TotalHours : 4.12936666666667E-05
TotalMinutes : 0.00247762
TotalSeconds : 0.1486572
TotalMilliseconds : 148.6572
Round 2
Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 37
Ticks : 379826
TotalDays : 4.39613425925926E-07
TotalHours : 1.05507222222222E-05
TotalMinutes : 0.000633043333333333
TotalSeconds : 0.0379826
TotalMilliseconds : 37.9826
Round 3
Days : 0
Hours : 0
Minutes : 0
Seconds : 0
Milliseconds : 38
Ticks : 389199
TotalDays : 4.50461805555556E-07
TotalHours : 1.08110833333333E-05
TotalMinutes : 0.000648665
TotalSeconds : 0.0389199
TotalMilliseconds : 38.9199
There’s a significant penalty, time-wise, for the first method, while the second two are almost tied.
Neat, right?
Be Careful!
The thing to remember is that whatever you’re measuring will actually run and will actually
do stuff. This isn’t a “safe test mode” or something. So you may need to modify your script
a bit, so that you can test it without actually performing the task at hand. Of course, that
can backfire, too. You can imagine that a tool designed to modify Active Directory might
run a lot faster if it wasn’t actually communicating with Active Directory, and so your
measurement wouldn’t really be real-world or useful.
One thing to watch for when running Measure-Command is that a single test isn't necessarily absolute
proof. Any number of factors might influence the result, so it often helps to run the test several
times. Jeff wrote a Test-Expression command in the PSScriptTools module that
allows you to run a test multiple times, giving you (hopefully) a more meaningful result. There's
even a GUI version.
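If you just want a quick-and-dirty version of that idea using built-in cmdlets, something like this works; the workload and repeat count here are arbitrary:

# Time the same script block ten times and summarize the results
$runs = 1..10 | ForEach-Object {
    (Measure-Command { 1..10000 | ForEach-Object { $_ * 2 } }).TotalMilliseconds
}
$runs | Measure-Object -Average -Minimum -Maximum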
Collections and arrays can get really slow if they get really big and you keep adding objects to them
one at a time. PowerShell arrays are fixed-size under the hood, so every addition forces .NET to
allocate a brand-new array and copy all the existing elements into it.
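For example, compare growing a plain array against a generic List (the sizes are arbitrary); the array version will typically be dramatically slower:

# Growing a plain array: a new array is allocated and copied on every +=
(Measure-Command {
    $a = @()
    foreach ($i in 1..20000) { $a += $i }
}).TotalMilliseconds

# Growing a generic List: the collection expands in place
(Measure-Command {
    $list = [System.Collections.Generic.List[int]]::new()
    foreach ($i in 1..20000) { $list.Add($i) }
}).TotalMilliseconds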
Anything storing a lot of data in memory can slow down if .NET has to stop and garbage-collect
variables that are no longer referenced. Generally, you want to manage reasonable amounts
of data in memory, not great huge wodges of 60GB text files.
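One way to keep memory use sane is to stream data through the pipeline rather than buffering it all in a variable first. A quick sketch, with a hypothetical log file:

# Buffers the entire file in memory before anything else happens:
$lines = Get-Content .\huge.log
$lines | Where-Object { $_ -match 'ERROR' }

# Streams one line at a time; memory use stays roughly constant:
Get-Content .\huge.log | Where-Object { $_ -match 'ERROR' }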
Compiling script blocks - which ForEach-Object requires - can incur a performance penalty. It isn't
always avoidable, but it isn't the fastest operation on the planet, either.
Wasting memory can result in disk paging, which can slow things down. For example, in the fragment
below, we're still storing a potentially huge - and unnecessary - list of users in $users long past the
point where we're done with it:
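The book's original fragment isn't included in this sample; here's a sketch of the pattern we mean, assuming the ActiveDirectory module and a made-up department name:

# Pull every user in the domain and keep them all in $users...
$users = Get-ADUser -Filter * -Properties Department
# ...then throw most of them away with client-side filtering
$sales = $users | Where-Object { $_.Department -eq 'Sales' }
$sales | ForEach-Object { <# do something with each user #> }
# $users still holds the whole domain's worth of objects down here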
It'd be better to do this entirely without variables, letting the filtering happen on the domain
controller:
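Again, the original listing isn't reproduced here; a minimal sketch of the idea, with the same hypothetical department:

# The -Filter runs on the domain controller; results stream straight through
Get-ADUser -Filter "Department -eq 'Sales'" |
    ForEach-Object { <# do something with each user #> }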
Now we're getting massively less data back from Active Directory, and storing none of it in persistent
variables. To put it more precisely, this is an example of the benefits of filtering early.
Here's the problem: we often see beginners write a command like this:
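The book's listing isn't in this sample, so here's a representative sketch, with a hypothetical server name, alongside the server-side alternative discussed next:

# Client-side filtering: every service object crosses the wire first
Get-CimInstance -ClassName Win32_Service -ComputerName SRV1 |
    Where-Object { $_.State -eq 'Running' }

# Server-side filtering: only running services ever leave the server
Get-CimInstance -ClassName Win32_Service -ComputerName SRV1 -Filter "State='Running'"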
This may not seem like a big deal, but imagine the CIM command was going to return 1,000 objects.
With the first approach, the command has to complete and send all 1,000 objects - in this case,
across the wire - and only then are the results filtered. Compare that to the second approach, which
lets Get-CimInstance do the filtering in place - on the server - and only sends the filtered results back.
There's one other feature you should take advantage of when using Get-CimInstance, and not many
people do. Let's say you are using code like this:
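Reconstructed for this sample from the description that follows; $computers is assumed to be defined earlier:

Get-CimInstance -ClassName Win32_Service -Filter "State='Running'" -ComputerName $computers |
    Select-Object -Property Name, StartMode, StartName, ProcessID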
The $computers variable is a list of computer names. This pattern is pretty common: get something,
and then select the things that matter to you. However, the remote server is assembling a complete
Win32_Service instance, with all of its properties, and you are throwing most of them away. The better
approach is to limit what Get-CimInstance will send back:
$cimParams = @{
    ClassName    = 'Win32_Service'
    ComputerName = $computers
    Filter       = "State = 'Running'"
    Property     = 'Name','StartMode','StartName','ProcessID','SystemName'
}
Get-CimInstance @cimParams |
    Select-Object -Property Name, StartMode, StartName, ProcessID,
        @{Name = 'ComputerName'; Expression = { $_.SystemName }}
PowerShell will want to display the results using its default formatting, so you'll most likely use
Select-Object or create your own custom object anyway. Regardless, this approach should run slightly
faster. The gain per query may be small, but it adds up - you'll really appreciate it when you're
querying 500 servers.
Always look for ways to limit or filter as early in your command as possible. Take advantage
of parameters like Filter, Include, Exclude, ID and Name.
Key Take-Away
You should get used to using Measure-Command to test your code, especially if there are several
ways you could go. We'll look at other performance-related concepts in the Scripting at Scale chapter.
But for now, your key take-away should be that good coding practices go a long way toward
avoiding performance problems!
Part 6: Pester
We provided an introduction to Pester earlier in this book, but now we’d like to really dig deep.
Pester is a pretty important part of the PowerShell universe these days, and if you’re going to be a
professional-grade PowerShell toolmaker, you should make Pester a big part of your world.
Why Pester Matters
In the world of DevOps and automation, it’s crucial that your code - you know, the thing that enables
your automation - be reliable. In the past, you’d accomplish reliability, or attempt to, by manually
testing your code. The problems with manual testing are legion:
• You're likely to be inconsistent. That is, you might sometimes forget to test some things, which
opens the door to devastating bugs.
• You’re going to spend a lot of time, if you’re doing it right (and the time commitment is what
makes most people not “do it right” in the first place).
• You end up wasting time setting up “test harnesses” to safely test your code, amongst other
“supporting” tasks.
This is where Pester comes in. Simply put, it’s a testing automation tool for PowerShell code, as we
explained earlier in this book.
• Pester is consistent. It tests the same things, every time, so you never “miss” anything. And, if
you discover a new bug that you weren’t testing for, you can add to your automated tests to
make sure that bug never “sneaks by” again.
• Pester can be automated, so it takes none of your time to perform tests.
• Pester integrates well with continuous integration tools, like Visual Studio Team Services (VSTS),
Jenkins, TeamCity, and so on, so that spinning up test environments and running tests can also
be completely automated.
Here's roughly how that plays out in an automated pipeline:
1. You check in your latest PowerShell code to a code repository, like Git or VSTS. That code
includes Pester tests.
2. A miracle occurs.
3. Your tested code is either rejected due to failed tests (and you’re notified), or your code
appears in a production repository, such as a NuGet repository where it can be deployed via
PowerShellGet.
The “miracle” here is some kind of automated workflow. VSTS, for example, might spin up a test
environment, load your code into it, and run your Pester tests against your code. We’re not going to
cover how to make the miracle work, as it’s not really a PowerShell thing per se, and because there
are so many combinations of options you could choose. We are going to focus on how to write those
Pester tests, though.
The big thing here is that you need to be writing testable code, a concept we’ll devote a specific
chapter to. But if you’re looking for the short answer on, “what is testable code?” It’s basically
“follow the advice we’ve been giving you in this book.” Write concise, self-contained, single-task
functions to do everything.
The other thing you’ll want to quickly embrace is to write your Pester tests immediately, if not
actually in advance of your code (something we’ll discuss more in the chapter on test-driven
development). This is going to require an act of will for most PowerShell folks, because we tend
to want to just dive in and start experimenting, rather than worrying about writing tests. But the
difference between the adults and the babies, here, is that the adults do the right thing because they
know it’s the right thing to do. Having tests available from the outset of your project is how you
reap the advantages of Pester, and indeed of PowerShell more generally.
So that’s why Pester is important. We don’t think anyone should really write any code unless they’re
also going to write automated tests for it.
It's also important to understand what Pester really does, and this gets a bit squishy. First, it's worth
considering the different kinds of testing you might want to perform in your life. Here are a few, but
by no means all:
• Unit testing is really just making sure your code runs. You want to make sure it behaves
properly when passed various combinations of parameters, for example, and that its internal
logic behaves as expected. You usually try to test the code in isolation, meaning you prevent it
from making any permanent changes to systems, databases, and so on. You’re also just testing
your code, not anybody else's. If your code internally runs Get-CimInstance, then you actually
prevent it from doing so during the test, since Get-CimInstance isn't your code (there's a short
sketch of this after the list). Unit testing is what Pester is all about, and it contains functionality
to help you achieve all of the above. The idea is to isolate your code as much as possible to make
testing more practical, and to make debugging easier.
• Integration testing is a bit more far-reaching. It's designed to test your code running in
conjunction with whatever other code is involved. This is where you'd go ahead and let internal
Get-CimInstance calls run for real, to make sure your code operates well when integrated
with other code. Integration testing is more "for real" than unit testing, and typically runs in
something close to a production environment, rather than in isolation.
• Infrastructure validation can be thought of as an extension to integration testing. Because so
much of our PowerShell code is about modifying computer systems, such as building VMs or
deploying software, infrastructure validation runs our code, and then reaches out to check the
results. Pester can also be used for this kind of testing, and we’ll get into it more later in this
book.
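As promised above, here's a tiny taste of what that unit-test isolation looks like. This is a hedged sketch using Pester v4-style syntax; Get-DiskReport is a hypothetical function of ours that calls Get-CimInstance internally:

Describe 'Get-DiskReport' {
    It 'reports free space without touching a real computer' {
        # Stand in for the real Get-CimInstance with a canned answer,
        # so nothing actually goes over the wire during the test
        Mock Get-CimInstance {
            [pscustomobject]@{ DeviceID = 'C:'; Size = 100GB; FreeSpace = 40GB }
        }
        $result = Get-DiskReport -ComputerName SRV1
        $result.FreeSpace | Should -Be 40GB
        Assert-MockCalled Get-CimInstance -Times 1
    }
}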
All of this is important to understand, because it helps you better understand what a Pester test
looks like. If you’ve written a function that’s little more than a wrapper around ConvertTo-HTML,
for example, then your Pester tests aren’t going to be very complex, because you probably didn’t
write much code. You’re not trying to make sure ConvertTo-HTML itself works, because that’s not
your code, so it’s not your problem in a unit test. Because so much of our PowerShell code is really
leveraging other people’s code, our own Pester tests are often simpler and easier to grasp.