
RFC for *-EnvironmentVariable cmdlets. #92

Closed


@charub commented May 16, 2017

List of cmdlets:

Get-EnvironmentVariable
Set-EnvironmentVariable
New-EnvironmentVariable
Add-EnvironmentVariable
Remove-EnvironmentVariable


@DarwinJS commented May 18, 2017

Just wanted to comment that a related concept, the sub-class of "Environment Variables that Contain Paths", has an entire host of its own special challenges.

And in fact, due to the special processing Windows performs on "PATH" (e.g. merging user + system values to build the process path, expanding environment variables, etc.), I think PATH might be a further sub-class of "Environment Variables that Contain Paths".

Personally, I think these should be separate CMDLets so that the functionality to handle these special cases is easy to discover. If it were buried in general environment variable CMDLets, it would risk becoming the greatest secret feature in Windows history :)

Here is the article that outlines the many issues and is backed by working PowerShell functions:

https://cloudywindows.com/post/straight-and-narrow-code-for-safe-windows-path-updates/

@rkeithhill

Three other commands you "might" consider are Get/Push/Pop-EnvironmentBlock. We have these implemented in PSCX, and they are a very handy way to make "temporary" environment variable changes, e.g.:

Push-EnvironmentBlock -Desc "Before importing VS 2015 vars"

PSCX\Invoke-BatchFile "${env:VS140COMNTOOLS}..\..\vc\vcvarsall.bat"

# Do stuff with compiler, etc and when done:

Pop-EnvironmentBlock

# Env block set back to the state when Push-EnvironmentBlock was called

This kind of thing is very handy when you need to bounce between different versions of VS compilers and tools (without your process $env:path growing like crazy).


Add the new cmdlets to PowerShell to get\set\create the environment variables for machine\process\user.

Currently users are forced to use the .NET APIs to retrieve\set the value of environment variables:

Suggest "... retrieve\set the value of environment variables for the user and machine scopes"
The environment provider works fine for setting process scoped environment variables.

$ENV:foo = "bar"

Currently users are forced to use the .NET APIs to retrieve\set the value of environment variables:

```powershell
[System.Environment]::GetEnvironmentVariables()
[System.Environment]::SetEnvironmentVariable("foo","bar")
```

I suggest changing the example to show user or machine scope.
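For instance, a scoped version of the example could look like this (a sketch using the documented EnvironmentVariableTarget overloads; the User and Machine targets are Windows-only):

```powershell
# Set and read a user-scoped (persisted) environment variable via the .NET API
[System.Environment]::SetEnvironmentVariable("foo", "bar", [System.EnvironmentVariableTarget]::User)
[System.Environment]::GetEnvironmentVariable("foo", [System.EnvironmentVariableTarget]::User)
```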

@BrucePay (Contributor)

For the PATH-style manipulations, how about Update-EnvironmentVariable ?

@DarwinJS commented Mar 16, 2018

I think paths should have their own switches or CMDLets - and they should not manipulate only PATH, because there are custom path variables.

It would be nice if scope took multiple values, or at least if there were a way to also update the current process:

Set-EnvironmentVariable PATH -AddPath "c:\someplace" [-AtEnd | -AtStart | -Before "value" | -After "value"] -Scope User | Machine -UpdateProcess

Set-EnvironmentVariable PATH -RemovePath "c:\someplace" -Scope User | Machine -UpdateProcess

When I look back at all the hours it takes to fix up a mess someone created with bad path updates, and how many ways there are to get it wrong no matter how hard you try, I am very desirous to see PowerShell apply its "make-it-so" intelligence in this area.

By way of re-mention - this tells all the ways well-meaning individuals can get it wrong: https://cloudywindows.com/post/straight-and-narrow-code-for-safe-windows-path-updates/

@Stephanevg commented Mar 17, 2018

Hi,

I think it is important that we have an easy way to manage environment variables, and for this, these ones seem just fine to me:

Get-EnvironmentVariable
Set-EnvironmentVariable
New-EnvironmentVariable
Add-EnvironmentVariable
Remove-EnvironmentVariable

The feature I would really like to have in the end with these cmdlets is an easier way to update the Path variable. But in my opinion this affects not only the Path variable.

Adding to an existing array environment variable

The suggestion that @DarwinJS mentions is too specific to the Path environment variable. This behavior would be identical for any environment variable that contains an array ($PSModulePath or $PathExt are other examples, but a custom one would be the same).

I think the solution (this module / these cmdlets) should take into consideration the fact that we want an easier way to handle environment variables that contain arrays.

That's why I like the idea of @BrucePay for Update-EnvironmentVariable, where I could imagine something like this:

#Let's say the current value $env:path is -> C:\Program Files (x86)\Microsoft VS Code\bin;C:\Users\stephanevg\AppData\Local\Microsoft\WindowsApps;C:\Scripts

$Array = @("C:\Scripts","D:\MyFolder")

Update-EnvironmentVariable -Name "Path" -Value $Array

I would imagine, in the case above, that it would simply try to add both of the values to the Path environment variable.
If, let's say, "C:\Scripts" is already present, it would simply consider that part of the job done and continue to the next value of the array.

I imagine it returning something like this:

Get-EnvironmentVariable -Name Path
C:\Program Files (x86)\Microsoft VS Code\bin;C:\Users\stephanevg\AppData\Local\Microsoft\WindowsApps;C:\Scripts;D:\MyFolder
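The merge behavior described above could be sketched roughly as follows (illustration only; Update-EnvironmentVariable is a proposal, so the snippet just demonstrates the intended semantics against the process-scope value):

```powershell
# Illustrative sketch: add each value that is not already present.
# PowerShell's -notcontains comparison is case-insensitive by default,
# which matches Windows path semantics.
$current = $env:Path -split ';' | Where-Object { $_ }
foreach ($value in @('C:\Scripts', 'D:\MyFolder')) {
    if ($current -notcontains $value) { $current += $value }
}
$current -join ';'
```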

Overriding existing values

I think it could also be interesting to have a way to override the existing variable with a new value.

#Let's say the current value $env:path is -> C:\Program Files (x86)\Microsoft VS Code\bin;C:\Users\stephanevg\AppData\Local\Microsoft\WindowsApps;C:\Scripts

$Array = @("C:\Scripts","D:\MyFolder")

Update-EnvironmentVariable -Name "Path" -Value $Array -Force

The command above would result in deleting the values that are not present in the source $Array and adding all the values from that same array to the Path environment variable.

I would expect something like this:

Get-EnvironmentVariable -Name Path
C:\Scripts;D:\MyFolder

@DarwinJS commented Mar 17, 2018

@Stephanevg - I think your idea of array handling is great!

However, this part - "...deleting the values that are not present in the source $array..." - is not a good idea. It is a fundamental thinking flaw, when engineering paths, to assume that one knows what else is in the path.

Although I no longer do mass desktop OS deployments, many users are capable of adding software to their systems, and this would cause mass havoc. The same is true across very large numbers of servers: no one individual knows what should be on the path for all machines. So I would not even make such functionality available, as it would create many problems. It would be OK to remove more than one stipulated path at a time, but not to remove all other paths besides a given list.

There are a couple places where paths are not like simple arrays:

  1. with regard to having some control over path precedence when other known conflicting paths are likely to be present.
  2. with regard to the fact that the Process path is an overlay of both the System and User path.

Because of #2 any path updates should always require that a scope be specified (User, Computer or Process).

Also because of #2, on removals it would be nice to specify that a path should be removed from both user and machine in the case that it appears in both (so that it is really, really gone).

@iSazonov (Contributor)

To support "array" env variables we could add -Delimiter parameter.

@rkeithhill commented Mar 20, 2018

In PSCX, we have the following commands to support path env vars. I would love to see equivalent functionality in the core set of commands:

03-19 18:41:36 30ms 4> gcm -m pscx *-Path*

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Cmdlet          Add-PathVariable                                   3.3.2      pscx
Cmdlet          Get-PathVariable                                   3.3.2      pscx
Cmdlet          Set-PathVariable                                   3.3.2      pscx

The PathVariable commands handle modifying the path-type variables, which include more than just PATH, e.g.:

03-19 18:44:02 33ms 10> Get-PathVariable -Name Include
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.13.26128\ATLMFC\include
C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.13.26128\include
C:\Program Files (x86)\Windows Kits\NETFXSDK\4.6.1\include\um
C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\ucrt
C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\shared
C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\um
C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\winrt
C:\Program Files (x86)\Windows Kits\10\include\10.0.16299.0\cppwinrt

@Stephanevg

I would also love to have PathVariable management be a bit more simplified. I like the idea of having cmdlets just for that. It would make things more discoverable, and anybody could simply find them by tabbing in the console.

The $env:Path is not the only variable that is a bit more 'difficult' to work with.
With the variable $env:PathExt we would face the same issues as with $env:Path, and it would not be covered by the *-PathVariable cmdlets, since it does not contain paths.

Also, a user-defined variable containing a delimited list will likewise not be handled by *-PathVariable.

Perhaps it makes sense to have both cmdlet sets:

Add-PathVariable
Get-PathVariable
Set-PathVariable

AND

Update-EnvironmentVariable -Name ([String]) -Value ([String[]]) -Scope (User|Machine) -Force

@DarwinJS commented Mar 20, 2018

Thinking about this more, I would like to make the case that there should NOT be any CMDLet whose primary way of working is to replace the entire path with another value.

Background: when thinking about this challenge, I always conceive of myself running the code on hundreds or thousands of production snowflaked machines. Desktop machines will always be snowflakes, and this should work well there - and many servers are too. Running on a massive number of machines should also work well - or else I'm just back to using my own custom stuff, as that is when the real risks of messing up are the most likely and, simultaneously, the most damaging.

As @Stephanevg stated, the goal of dedicated CMDLets would be to ease the issue and be discoverable. However, if CMDLets are provided that replace the path value, it becomes an anti-pattern, as it makes catastrophic damage very easy.

In other words, I believe helping people succeed has two edges:

  1. make hard stuff easier and
  2. make damaging yourself harder.

I think cleaning out the entire path should take a specific, dedicated action that makes one think about what they are doing.

For instance, "Remove-Path -All" would remove the entire path, but require -Force to actually work, and would give a warning that this is a very unusual and probably damaging action to take. And it should never remove Windows and System32, even when this action is taken. Then, to hurt themselves with a complete path replacement, the following code is required (and gives one reason for pause):

Remove-Path -Path All -Scope Machine -Force
Add-Path -Path "c:\whatever;c:\yep"

Here are some other features to keep people in the pit of success:

  • Always preserve %WINDIR% and %WINDIR%\System32 on the MACHINE and PROCESS path - no matter what the user asks to do - maybe allow this for USER scope in case they are trying to remove it because of a bad update by an installer or other code. If they feel they need to remove these, then they can use the .NET methods, so that they hopefully have the needed thinking time.
  • AUTOMATICALLY back up the previous path (maybe keep more than one backup?). Windows does not do this (but should), and this would be a massive value-add for everyone's first path-update screw-up, since mass-deployed path mistakes are so damaging. If not automatic, then an allowable option - though I would prefer it be automatic, with an explicit action required to negate the backup.
  • before updating, check if the path addition will push the path over 260 characters and error if so - rather than create an unusable path.
  • should be idempotent with regard to adds - if the exact value is already there (case insensitive), don't add, but report true. If this is not desirable for Add-Path, then another CMDLet, Ensure-OnPath. Putting this right in Add-Path would get rid of the AGE-OLD problem of seeing some type of install or maintenance script re-add the same path until it overflows the maximum path length.
  • at least warn (prefer error) when someone uses a literal path that contains "windows" or "system32" but not the proper environment variables for these - these should ALWAYS start with an environment variable so they are applicable on all systems they run on.
  • Always be operating on the registry path with unexpanded environment variables - possibly have an option to expand for relevant CMDLets like "Get-".
  • never allow someone to accidentally overwrite the entire path - even if it's their own fault for not duly considering the update. It's just too easy to assume path updates are simple and discover the hard way that they are not.
  • searches for a target path should default to NOT be substring searches within the individual paths, but direct matches (case insensitive).

Also, I think CMDLet names should be descriptive of what you are doing - you are adding a path to a variable, not adding a new path variable. So either "Add-Path" or "Add-PathToVariable".

Add-Path -Variable -Path -Scope -Position -NumberOfBackups #position is important when the path add must be first, last or before or after something. Not giving this capability means a massive rabbit hole of engineering if I need specific path placement.
Remove-Path -Variable -Path [All] -Scope [All] #warn on -Path "All" -NumberOfBackups # for "-Path All" - make it explicit this is VERY odd and require -Force. -Scope All to allow a removal to be complete without having to issue the command 3 times.
Ensure-OnPath -Variable -Path -Scope -NumberOfBackups #would prefer "add-path" is just idempotent to avoid excessive CMDLets and can't think of any valid situation where someone actually desires the same path on a path variable multiple times.
Get-Path -Variable -ExpandEmbeddedVariables #make the default to use unexpanded registry values, but make expansion possible.
Replace-Path -SearchPath -ReplacePath -SubstringSearchEachPath -NumberOfBackups #helpful when you need to update a path and make sure it stays in the same position in the existing path.

I'm not sure what other CMDLets would be needed.

Here are some other thoughts with regard to the CMDLet set:

  • Explicitly DO NOT provide CMDLets that replace the entire path or imply that they do so, example: "Set-Path" (Set verb implies overwriting) and "Update-Path"
  • Explicitly never create functionality that does substring searches across all paths. Inevitably someone will use it for c:\program files or another common path or a substring they don't realize is part of another unrelated path.

@rkeithhill

First +100 to everything @DarwinJS says above - although Ensure-OnPath could probably be Add-Path's default behavior (only add if not already present) with -Force to override. Second, we also need this - https://stackoverflow.com/questions/171588/is-there-a-cmmand-to-refresh-environment-variables-from-the-command-prompt-in-w

I'm trying to write scripts to provision dev machines on Windows but it requires that tool A's path mods are available immediately so I can install tool B using tool A i.e. install python (which installs pip) and then use pip to install conan. So yeah, an Update-Environment command would be very handy.

@DarwinJS commented Mar 21, 2018

Nice catch @rkeithhill - I actually thought of that and then forgot about it while writing my last post.

I would be in favor of a little more specific CMDLet name like:
Update-ProcessEnvironment or Update-ProcessEnvVars

And what if you want to update the path, but not take on the risk of other variable changes?

Update-PathVariable -Variable ?

I would also say that if there is a way to call the Windows API to do that path refresh that would be best.

A workaround, if the recently added path has NO environment variables, is to simply add the same path to the Process Path scope. But that does not handle more complex scenarios, such as environment variables in the path, nor does it emulate the EXACT placement of the path in the list when Windows overlays Machine and User.
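The simple case of that workaround amounts to something like this (a sketch; the directory name is illustrative):

```powershell
# After persisting a new User/Machine path entry, mirror it into the
# current process so it takes effect immediately (sketch only)
$newDir = 'C:\Tools\bin'  # illustrative
if (($env:Path -split ';') -notcontains $newDir) {
    $env:Path += ";$newDir"
}
```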

I'm not sure, but I would also guess that environment variables in the MACHINE path are either:

  • only resolved using environment variables ALSO in the MACHINE context (possible hacking prevention of repointing system protected values to user writable locations) OR
  • it at least prefers the value in MACHINE before electing to use one with the same name in USER context

Chocolatey has a built-in refreshenv - I don't know if it is calling Windows APIs or has its own code.

In any case, I would be very hesitant to try to reverse engineer exactly how Windows builds the final, expanded, overlaid "Process" path - if the native Windows algorithm is accessible via an API, it should be used.

@rkeithhill

Heh, refreshenv is what inspired my request. :-)

@iSazonov (Contributor)

I believe we should consider separately *-EnvironmentVariable cmdlets and *-EnvironmentPath cmdlets.

@Jaykul (Contributor) commented Aug 18, 2018

I don't know if anyone has started this yet, but for what it's worth, I think that any Add-PathVariable or Set-PathVariable needs to have -Prepend and -Append options so you can choose to add a path to the beginning or end of the variable -- and in each case, the command should (by default?) remove other instances of the path from the environment variable and ensure the only instance is at the front or end.

Finally, anything that's messing with "path" variables is different from other array variables in that it should probably have a switch to allow cleaning up the path by removing entries that point to paths that don't actually exist. It's also worth mentioning that path variables can contain variables (e.g. %windir%\System) and that on Windows, paths aren't case sensitive, so detecting duplicates or missing paths is a little tricky...

Obviously I should add a link to my limited implementation of this stuff for reference too 😉

@bgshacklett

In spirit, I agree with a lot of what's being said here about the safety of updates, but I feel that core Cmdlets are the wrong place for it. Rather, if one is making mass updates to workstations, one should be looking at DSC, which is specifically designed for things like idempotence, guaranteed state, and performing updates in a "safe" manner.

In general, I believe simplicity should be favored for core Cmdlets, along with behavior that models the input of the user as exactly as possible, keeping in mind the principle of least astonishment and the question "What would I expect this Cmdlet to do if I had encountered it before?" With this in mind, I certainly would not expect PowerShell to ever ignore my inputs, as was suggested for one earlier edge case.

I'm hesitant to agree with the inclusion of the suggested *-PathVariable Cmdlets due to their oddly specific nature (as compared to the more generic nature of most other PowerShell Cmdlets). This is tempered by the fact that the GUI now treats the Path variable differently than other "standard" variables, but I lean more toward ensuring the ability to handle "array-style" variables, as @iSazonov alluded to earlier.

Regarding default scope of the write-oriented cmdlets, I find myself in an unfortunate position of preferring Process (with CurrentUser second on my list), but conceding that Machine is more in line with the behavior of pre-existing Cmdlets such as Install-Package. It bothers me that the mentality of configuring everything at the system level and assuming Administrator privileges is still so prevalent.

With the Get-EnvironmentVariable Cmdlet, I would note that returning the environment variables for the Machine scope by default would put the behavior of this Cmdlet at odds with the behavior of using the $Env prefix or using the Environment Provider, both of which return a merger of variables declared in all scopes. I would suggest that it may make sense for this Cmdlet to do the same by default.

@rkeithhill

one should be looking at DSC

I think making this available only via DSC is too limiting. When we bootstrap/configure a build environment for a dev, we are doing this for each particular Git repo, not machine wide. For this scenario, a set of environment variable management commands is the way to go IMO.

@DarwinJS commented Oct 9, 2018

I agree with @rkeithhill that DSC is too limiting - in fact, this should be in the smallest PowerShell footprint, because environment variables are a foundational part of an operating system.

@mklement0 (Contributor) commented Oct 12, 2018

Let me try to summarize the existing discussion and make some recommendations:

3 distinct areas of functionality have emerged:

  • (a) Support for modification of persisted environment variables and their values (Windows-only) - the original topic of this RFC.

  • (b) Support for robust modification of the entries in $env:PATH and similar variables.

  • (c) Support for selective, temporary modification of the process environment so as to run commands with a specific environment, as well as refreshing the current process' environment from the (updated) persisted state, on Windows.


I suggest keeping this RFC focused on (a) only; see my suggestions in a follow-up post.

Below is a summary and food for thought for spinning off (b) and (c):


(b), due to the many pitfalls and subtleties involved, as detailed by @DarwinJS, definitely deserves its own (set of) cmdlet(s).

  • $env:PSModulePath requires the same treatment as $env:Path, because its effective value too is the concatenation of the machine-level and user-level definitions (although it is PowerShell itself that performs this concatenation, whereas it is Windows that does it for all processes in the case of $env:PATH).

  • As an alternative to multiple *-PathVariable cmdlets, I propose a single cmdlet that supports all desired operations, along the following lines:

Update-PathVariable
  [[-Name]=<string>='PATH'] [-Remove <string[]>] [-Prepend <string[]>] [-Append <string[]>] 
  [-InsertBefore <string[]>] [-InsertAfter <string[]>]
  [-Scope = 'Process'|'User'|'Machine'] 
  [-Force]

This makes it clear that changes should be limited to modifications of the existing value, as opposed to (potentially catastrophic) full replacement.

Scopes User and Machine would only be supported on Windows, just like in CoreFx.

Apart from PATH and PSModulePath, other variables such as $env:PATHEXT and $env:CLASSPATH can also benefit from this entry management, though note that they're not subject to the implicit concatenation of machine- and user-level definitions (Windows).

The platform-appropriate (and confusingly named) [IO.Path]::PathSeparator must be used as the entry separator.

One challenge on Windows is how to handle unexpanded entries such as %SystemRoot%\system32 when matching existing entries with -Remove and -InsertBefore / -InsertAfter, though these are typically limited to system-defined entries that shouldn't be modified anyway; allowing matching by both the raw and expanded values sounds useful.

Not finding targeted entries should be a quiet no-op (with -Remove) or fall back to -Append (with -InsertBefore, -InsertAfter); with multiple operations specified, -Remove must be performed last.


As for (c):

  • The Push-Environment / Pop-Environment cmdlet pair mentioned by @rkeithhill sounds promising.

  • I suggest complementing it with an Invoke-WithEnvironment cmdlet, that allows invocation of a single command (script block) with a temporarily modified environment.

    • E.g., Invoke-WithEnvironment @{ PORT=8080 } { node index.js }, which would temporarily set (create or override) $env:PORT for the node call and restore the previous value, if any, afterwards; a simple PowerShell implementation can be found in this Gist.
    • Alternatively, a dedicated, POSIX-shell-like syntax could be implemented, as discussed in Provide simple one-time setting of environment variable PowerShell#3316
  • As for refreshing the current process' environment from updated persisted definitions:

    • Drawing inspiration from Chocolatey's Update-SessionEnvironment cmdlet, which @rkeithhill linked to above, makes sense for an Update-Environment cmdlet.
    • It makes sense to extend it to accept the name(s) of specific variables for selective refreshing, as @DarwinJS proposes.
    • There would be no point in including this cmdlet in the Unix versions of PowerShell Core.
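The Invoke-WithEnvironment idea described under (c) could be sketched roughly like this (the cmdlet name and behavior are proposals from the discussion, not an existing API; this is an illustration of the intended semantics only):

```powershell
# Sketch only: temporarily set environment variables for a single script block
function Invoke-WithEnvironment {
    param(
        [hashtable]$Environment,
        [scriptblock]$ScriptBlock
    )
    $saved = @{}
    foreach ($name in $Environment.Keys) {
        $saved[$name] = [Environment]::GetEnvironmentVariable($name)  # $null if unset
        [Environment]::SetEnvironmentVariable($name, $Environment[$name])
    }
    try {
        & $ScriptBlock
    }
    finally {
        # Restore previous values; setting a variable to $null removes it
        foreach ($name in $saved.Keys) {
            [Environment]::SetEnvironmentVariable($name, $saved[$name])
        }
    }
}

Invoke-WithEnvironment @{ PORT = '8080' } { node index.js }
```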

@mklement0 (Contributor) commented Oct 12, 2018

As for (a), the original topic of this RFC:

Instead of creating new cmdlets, I suggest extending the existing Environment drive provider.

The env: drive and the generic *-Item / *-Content cmdlets already support all basic operations - all they're missing is a (dynamic) -Scope parameter in the context of the Environment provider.

# Create:
New-Item env:foo -Value bar # same as: $env:foo = 'bar' (implicit creation)

# Create / update:
Set-Item env:foo -Value baz # same as: $env:foo = 'baz'
# Set-Content env:foo -Value baz would work too.

# Get:
Get-Item env:foo # returns a [System.Collections.DictionaryEntry] instance with .Key/.Name and .Value
Get-Content env:foo # same as: $env:foo

# Remove:
Remove-Item env:foo   # same as: $env:foo = ''

With the addition of a dynamic -Scope parameter to all these cmdlets (note that Add-Content is not supported, and rightfully so), Process would remain the default scope, as currently.

This means that namespace notation - e.g., $env:foo - would only allow access to the Process scope - as currently - whereas Machine and User scope operations would require the *-Item (and *-Content) cmdlets.

Given that manipulating persisted values should be a more deliberate action anyway, that distinction strikes me as reasonable.

As stated, the Machine and User scopes would only be supported on Windows.

More things to consider:

  • Should updating a persisted value also update the in-process variable, or should that require a separate act? Perhaps a switch to allow selecting either behavior?

    CoreFx does not update the current process' variable, but I've always found that curious - and I've gingerly suggested adding UserAndProcess and MachineAndProcess enumeration values to the target parameter of Environment.SetEnvironmentVariable() (see here).

  • Perhaps $env:PATH and $env:PATHEXT and $env:PSModulePath should be protected from commands such as Remove-Item env:PATH -Scope Machine and Set-Item env:PATH -Scope Machine -Value ...: they could fail by default and require -Force.

@SteveL-MSFT (Member)

@PowerShell/powershell-committee discussed this; in general, we are not against having multiple ways to manage env vars, so having cmdlets doesn't preclude improving the env provider (and adding the dynamic -Scope parameter). Another alternative is to have containers within env:, such as env:machine\foo, for declaring scope, or, following the registry provider, envmachine:foo.

@vexx32 (Contributor) commented Jan 7, 2019

Given that scopes are an existing concept that typically have the prefix pattern provider:scope:item, is that something we should follow or is that a deliberate break from the PowerShell-specific notion of scope, @SteveL-MSFT?

$env:machine:Path (etc)

@SteveL-MSFT
Member

@vexx32 using the existing syntax would make sense

@DarwinJS commented Jan 7, 2019

What would happen when you embed an env var reference in the "$env:machine:path" nomenclature? (FYI, these must be regular Windows shell references like "%WINDIR%".)

We should be very careful to disambiguate the very special scope of the PATH variable, and all its challenges, from a general environment variable.

@vexx32 (Contributor) commented Jan 7, 2019

Are those cases handled at all, currently?

@mklement0 (Contributor)

@SteveL-MSFT

we are not against having multiple ways to manage env vars so having cmdlets doesn't preclude improving the env provider

Can I ask why you're for having multiple ways? What is the benefit of introducing multiple PowerShell solutions in this case (using the .NET types directly is always an option)?

The down-sides are potential confusion (why are there different approaches, and how do they relate to each other?), and additional implementation and documentation effort.

Or do you think that the provider improvements cannot provide all required functionality?

In general, I think it's worth giving the provider model a little more love.

@vexx32 (Contributor) commented Jan 8, 2019

Several providers do have their own cmdlets (see: *-Alias, *-Variable, etc.) and there could potentially be a place for *-EnvironmentVariable cmdlets, especially where potentially needing to resolve embedded env variable references in %PATH% is concerned.

However, I completely agree that improving the provider should come first.

@DarwinJS commented Jan 8, 2019

One thing that is getting lost here is that I pointed out that the PATH environment variable is NOT LIKE ANY OTHER environment variable, due to all these issues: https://cloudywindows.io/post/straight-and-narrow-code-for-safe-windows-path-updates/

With regard to all other environment variables:
$env: only manipulates process scope, and so was artificially limited in its original conception for SETTING environment variables, as it only does so in the process context - like the Windows shell's "set" and %varname%. Maybe it should stay namespace agnostic, in keeping with these age-old expectations, and then provide CMDLets for better coverage of setting and getting?

However, PowerShell's promiscuous typing and methods hold to the pragmatics of valuing discoverability of how to do things over and above "technical purity" (non-duplication of APIs, et al.). I know I've personally had much more benefit from the pragmatic focus than from the purity one.

@SteveL-MSFT (Member)

@mklement0 not being against something doesn't mean we are "for" something. The discussion we had is that PSProviders may be more difficult to use than cmdlets so having cmdlets in addition to the provider isn't necessarily a bad thing. Personally, I'm ok with the provider only solution as that's what I'm accustomed to, but I probably don't represent the general user.

@mklement0
Contributor

@SteveL-MSFT:

PSProviders may be more difficult to use than cmdlets so having cmdlets in addition to the provider isn't necessarily a bad thing.

As @vexx32 rightfully points out, this duplication has ample precedent.

Yet, precedent isn't in and of itself justification: My guess is that the duplication is part of the problem of neglecting the provider paradigm. If every area of functionality has dedicated cmdlets, the provider-based alternative becomes irrelevant, and the powerful abstraction underlying the provider model isn't broadly recognized.

@DarwinJS: The discovery problem is well worth considering, but I would argue that sticking with the provider model is not about technical purity - on the contrary: it's about providing useful abstractions that can be applied to different areas of functionality, reducing learning effort.

Now there may be areas where the provider model becomes too restrictive, but management of environment variables seems like a good fit, especially given that the current - limited - access is wholly based on the provider model, via namespace variable notation ($env:foo being shorthand for Get-Content env:foo), even though that may not be widely known (which is part of the problem of the provider paradigm languishing in partial obscurity).

As for your concerns re $env:PATH: I share them, which is why I proposed providing a dedicated Update-PathVariable (perhaps Update-PathEnvironmentVariable) cmdlet for manipulating PATH[-like] environment variables above, combined with preventing replacing the full value via Set-Item, except with -Force - see above

If we apply the same safeguard to namespace notation - e.g., $env:machine:PATH - I think it's fine to allow this scoped access, especially given that this syntax is already supported with other providers, as @vexx32 notes; e.g., ${function:global:clear-host}

As for reading / writing the unexpanded values (raw REG_EXPAND_SZ values):

  • For getting, I suggest implementing that as another dynamic provider parameter, -Raw; e.g., Get-Item Env:Path -Scope Machine -Raw.

  • For setting we could implement something like -Expandable, while also respecting a preexisting REG_EXPAND_SZ definition implicitly.

It's interesting to note that [environment]::Get/SetEnvironmentVariable() lacks support for unexpanded values (direct manipulation of the registry is required).
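For reference, reading the raw (unexpanded) value is possible via the registry API's DoNotExpandEnvironmentNames option (a Windows-only sketch; the key shown is the standard machine-level environment key):

```powershell
# Sketch: read the machine PATH without expanding %...% references
$key = [Microsoft.Win32.Registry]::LocalMachine.OpenSubKey(
    'SYSTEM\CurrentControlSet\Control\Session Manager\Environment')
$rawPath = $key.GetValue(
    'Path', '', [Microsoft.Win32.RegistryValueOptions]::DoNotExpandEnvironmentNames)
$key.Close()
$rawPath   # entries such as %SystemRoot%\system32 remain unexpanded
```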

@joeyaiello (Contributor)

We think some of this is doable but not incredibly necessary or high-priority at this point. @charub had written some of the code originally before the RFC was written, I'm going to follow up offline with her to see if there's a starting place / minimum viable that we can ship.

@joeyaiello (Contributor)

Spoke to @charub today, and she thinks she might have the code lying around somewhere for us to give it a look.

@joeyaiello (Contributor)

@JamesWTruher is going to be releasing an equivalent in the Gallery as a place to incubate the module before we decide whether and when it should be included as part of PowerShell. There will be a new RFC describing the behavior of that module if we decide we want to take it in.
