Cmdlet name clashes in PowerShell: What to do?

This question has been asked in various ways over the last few years, and I don't believe an answer that suits everyone has been proffered yet. I think this is part of a broader problem space that needs to be solved, one that I (and many others) have spent a fair bit of time thinking about -- for me personally, it's been mostly in the pub, strangely enough, usually with a pint in hand -- and while I don't profess to have the answer, I do spend most of my PowerShell time tinkering with providers and have some views on this ;-)

Firstly, if you haven't seen the suggestion I've raised on Connect (Allow providers other than the filesystem provider to surface commands), take a gander at it now. It suggests allowing providers other than the FileSystemProvider to surface commands by means of a new ProviderCapabilities flag. For those not able to read the suggestion, the bottom line is that currently one can execute a command on the filesystem using the following syntax:

ps c:\> .\test.bat
hello, world

However, if you had another provider that hooked into a mobile device, the Amazon S3 service or Windows Live SkyDrive (when are they going to release an API?) for example, this flag would allow execution of commands with the same syntax, e.g.

ps skydrive:\> cd ppts
ps skydrive:\ppts> .\mydemo.ppt
The term '.\mydemo.ppt' is not recognized as a cmdlet, function, operable program, or script file. Verify the term and try again.
At line:1 char:8
+ .\mydemo.ppt <<<<

As you can see, this doesn't work.

What's important is having the same experience across similarly capable providers, i.e. those that can host executable content. Yes, you can implement support for Invoke-Item, but it's a bit discordant. One of the nicest features of PowerShell (and sometimes the most confusing) is that all providers -- variable, function, environment, filesystem etc. -- hook into the same framework. There are some philosophically irksome differences, like the fact that the variable drive is the "default" provider, since dollar-qualified expressions are assumed to point there if not qualified with a drive name:

ps c:\> $host
Name             : ConsoleHost
Version          : 1.0.0.0
InstanceId       : 9d8a29bf-3d84-4ce6-8651-e0c72afb404b
UI               : System.Management.Automation.Internal.Host.InternalHostUserInterface
CurrentCulture   : en-CA
CurrentUICulture : en-US
PrivateData      : Microsoft.PowerShell.ConsoleHost+ConsoleColorProxy

So let's give it a drive name this time:

ps c:\> ${c:test.bat}
@echo hello, world
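
The braced syntax writes as well as reads; a quick sketch (test.txt here is just a hypothetical scratch file):

ps c:\> ${c:test.txt} = "hello, world"
ps c:\> ${c:test.txt}
hello, world
ps c:\> get-content c:\test.txt
hello, world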

This is a lot easier to understand once you realise that the '$' prefix is a grammatical shortcut into the IContentReader/IContentWriter interfaces present on every provider; much easier than blindly committing to memory one method for reading a variable and another for reading a file (imho). Once you introduce this capability into other providers, you then have to address ye olde $env:path variable. Currently this variable is imported from the system environment, and the system, aka Windows, knows nothing about PowerShell and its drives. When using Get-Command, the search order for discovering commands is:

  1. Aliases
  2. Functions
  3. Cmdlets
  4. Scripts
  5. Commands located in the directories specified by the Path environment variable
  6. External scripts

As we can see, commands (5) are discovered via the $env:path variable. The other items (apart from 3, cmdlets) all live in a flat namespace, so there's no path involved there. I'd love it if somehow it were possible to add any PowerShell path to this variable, even if they were limited to drive-qualified paths:

ps c:\> $env:path
c:\windows; c:\windows\system32; ...; s3:\utilities; "mobile:\storage card"; ...

Perhaps this information could be persisted inside PowerShell only, so that when the shell exits, the path environment variable remains unchanged when viewed from the external Windows system. This might mean that PowerShell paths would have to be appended in your profile at each load, but that isn't a bad thing either, IMO. So finally, on to the crux of the matter: disambiguation of identically named cmdlets. Ultimately I don't believe there is a magic answer. Windows solves this with the path variable, so I believe it isn't such a bad idea to solve it with a path variable in PowerShell too. Behold $env:cmdletpath:

ps c:\> $env:cmdletpath
Microsoft.PowerShell.Core; Microsoft.PowerShell.Host; Microsoft.PowerShell.Management; Microsoft.PowerShell.Security; Microsoft.PowerShell.Utility; VMWare.Commands.Utility

It's simple. It's optional. Snap-in-qualified commands would continue to work, overriding the cmdlet search path. Instead of having to alias multiple commands when using a snap-in that replaces a suite of built-in cmdlets, you could just re-jig the search path. Done. It's not the answer to everything, but it sure would make life a bit easier, no?
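
For the record, the snap-in-qualified syntax mentioned above already works today. If a third-party snap-in shipped a conflicting Get-Item, you could disambiguate like this (the VMware snap-in and its drive below are purely for illustration):

ps c:\> Microsoft.PowerShell.Management\Get-Item c:\windows
ps c:\> VMWare.Commands.Utility\Get-Item vmfs:\datastore1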

Visual Studio 2008 + .NET 3.5 SP1 Beta Experiences

I have two machines at home: one is a Vista SP1 laptop, the other an XP SP3 desktop. The latter was recently patched with SP3 in a desperate attempt to prolong its dwindling desire to function in a reasonable fashion. From a development-box point of view, it's what I call "encrusted." Encrustation is the point at which you no longer have any idea what beta, CTP, evaluation or otherwise not-recommended software is installed. It defies its digital nature by throwing up different errors each time it's booted. Anyway, the SP1 beta would not install on this machine, specifically the .NET 3.5 beta bits. I may investigate further, but frankly I think it's time for fenestrecide and a corresponding rebirth.

This led me to try updating my Vista laptop instead. If you download the VS2008 patch, you'll notice it's only about 450KB: it's a stub which detects what's missing from your environment and downloads only what it needs. I found the installer extremely unhelpful in terms of feedback about what it was doing. It just says "installing" and you have to watch a progress bar slowly creeping across, with many stalls where your machine doesn't seem to be doing anything at all. This time I downloaded the separate .NET 3.5 SP1 beta bits, which weigh in at about 220MB. I installed this first and it seemed to go OK. Next, I downloaded the VS patch and let it go ahead. Again, many stalls where you're wondering if it's actually going to work at all. Eventually, it failed. After some examination of the logs, I discovered that it didn't like the post-RTM patches for supporting the Reference Source server (Shawn Burke's Blog - Configuring Visual Studio to Debug .NET). After removing these updates, I was able to progress past the prior point of failure, but by this time it was midnight, having started the process in earnest at around 8pm. I decided to leave it to run overnight.

In the morning I had a stalled install process and a dialog notifying me that files were in use, and that I should shut down "Machine Debug Manager," "Windows Sidebar Component" and "Windows Sidebar" - Vista GUI cruft - if I wanted to avoid a reboot. I stopped the MDM service, shut down Windows Sidebar and chose "Retry" from the options of "Retry," "Cancel" and "Ignore." Even though I hit "Retry," the bland dialog displaying "installing" now switched to "Installation failed... rolling back" - but thankfully, it was NOT rolling back. The status bar appeared stalled for about 10 minutes, then popped up the same dialog, this time with only "Windows Sidebar" listed. This time I chose "Ignore," implicitly accepting the penalty of a reboot, to allow the status bar to continue its inexorable journey to the right, the text all the while telling me that the installation had failed and was rolling back. Ten more minutes and the install succeeded. To summarise: VS seems snappier, and everything seems to work fine. I'll post more if I discover anything of interest.

The usual suspects have more information:

Beta of .NET 3.5 and VS2008 SP1 is out (Scott Guthrie)

VS2008 and .NET 3.5 SP1 Beta - Should You Fear This Release? (Scott Hanselman)

.NET 3.5 SP1 Beta: Changes Overview (Patrick Smacchia)

PowerShell 2.0 CTP2 Problems with WinRM: Access is Denied

After installing the new WinRM 2.0 and PowerShell 2.0 CTP2 bits onto my Vista SP1 laptop, I kept getting "Access is denied" messages while running the "Configure-WSMan.ps1" script. Fellow MVP Richard Siddaway discovered that disabling UAC seemed to clear up the problem for him, but that's not really a good solution - I want to keep UAC enabled. It turns out that another precondition for this error is that your machine is not joined to a domain, i.e. it's standalone or in a workgroup. After some communication with the PowerShell team, who in turn talked to the WinRM team, it appears that some additional configuration is needed for machines in this situation:

If the account on the remote computer has the same logon username and password, the only extra information you need is the transport, the domain name, and the computer name. Because of User Account Control (UAC), the remote account must be a domain account and a member of the remote computer Administrators group. If the account is a local computer member of the Administrators group, then UAC does not allow access to the WinRM service. To access a remote WinRM service in a workgroup, UAC filtering for local accounts must be disabled by creating the following DWORD registry entry and setting its value to 1: [HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System] LocalAccountTokenFilterPolicy.

This is taken from http://msdn.microsoft.com/en-us/library/aa384423.aspx
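
If you'd rather make that change from PowerShell than from regedit, something like this should do it (a sketch; run it from an elevated shell, and the usual caveats about editing the registry apply):

new-itemproperty -path hklm:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System `
    -name LocalAccountTokenFilterPolicy -propertytype DWord -value 1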

This information can also be found buried in one of PowerShell 2.0's help files, accessed via:

ps> get-help about_remote_faq | more

How to "copy con text.txt" in PowerShell (updated)

Now this is barely worth blogging, but one of the things I used a lot when I was confined to cmd.exe (yes, "confined" is the word I would use, 4NT aside) is the wonderfully simple copy con filename.txt: type a few lines and end it all with CTRL+Z and Enter. So, if you're a PowerShell noob and yearn for this olden-days simplicity like dead parrots pine for the fjords, then this is for you:

rm function:copy-console -ea 0 # silentlycontinue
rm alias:cc -ea 0

# encoding can be String, Unicode, Byte, BigEndianUnicode, UTF8, UTF7, Ascii
function global:copy-console {

    param(
        [string]$Filename = $(Throw "Need output filename."),
        $Encoding = "ASCII"
    )

    # combine takes two arguments; $pwd flattens to its string path here
    $out = [io.path]::combine($pwd, $Filename)

    $buffer = @()

    do {
        # readline returns $null on CTRL+Z followed by enter
        $line = [console]::readline()
        if ($line -eq $null) { break; }
        $buffer += $line
    } while ($TRUE)

    $buffer | set-content $out -Encoding $Encoding
}
new-alias cc copy-console

# Usage:
#
# PS> cc test.txt -Encoding utf8
# bleh
# moop
# vlorg
# ^Z
# PS> cat test.txt
# bleh
# ...

I saved this to copy-console.ps1 and aliased it to "cc." Of course, you can do whatever you want - it's probably easier to just place the script above into your profile. And remember: CTRL+Z then Enter to save.
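
If you've never created a profile before, here's a quick way to make one and open it for editing (the path varies per host, so let $profile tell you where it lives):

PS> if (-not (test-path $profile)) { new-item -itemtype file -path $profile -force }
PS> notepad $profile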

UPDATE 2008-04-04: Somehow I completely broke this in my attempts to "clean it up" before posting. I've reposted a better version (imho), and implemented encoding support as suggested in Jason's comment below ;-)

Create your own custom "Type Accelerators" in PowerShell

PowerShell has several "type accelerators" which are used exactly like a casting operation. Examples of these special operators are [xml] and [wmi]. The former is used for quickly converting a string of XML into a fully-fledged System.Xml.XmlDocument object.

Often I find myself converting things to and from hexadecimal using the -f format operator and [convert], but this always seemed like just a little too much typing for me. The long-hand looks something like this:
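
PS > "0x{0:X}" -f 255
0xFF
PS > [convert]::toint32("ff", 16)
255

Enter the [hex] accelerator type: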

[screenshot: the [hex] accelerator in use at a PowerShell prompt]

As you can see from the source below, there's no magic here. This is just a straight cast, but I have no namespace. If I had a namespace - say, "Nivot.PowerShell" - we'd have to cast using [nivot.powershell.hex] instead of just [hex]. All of the trickery is done using operator overloads in C#. These tell .NET (and in turn, PowerShell) how to behave should someone try to add, subtract, multiply or divide our instances.

using System;
using System.Collections.Generic;
using System.Text;

public class Hex
{
    private readonly int _value = 0;

    private Hex(int value)
    {
        _value = value;
    }

    private Hex(string value)
    {
        // parse the string as base-16
        _value = Convert.ToInt32(value, 16);
    }

    public static implicit operator Hex(int value)
    {
        return new Hex(value);
    }

    public static implicit operator int(Hex value)
    {
        return value._value;
    }

    public static explicit operator Hex(string value)
    {
        return new Hex(value);
    }

    public static Hex operator +(Hex op1, Hex op2)
    {
        return new Hex(op1._value + op2._value);
    }

    public static Hex operator -(Hex op1, Hex op2)
    {
        return new Hex(op1._value - op2._value);
    }

    public static Hex operator *(Hex op1, Hex op2)
    {
        return new Hex(op1._value * op2._value);
    }

    public static Hex operator /(Hex op1, Hex op2)
    {
        return new Hex(op1._value / op2._value);
    }

    public override string ToString()
    {
        return "0x" + _value.ToString("X");
    }
}

The next step is to tell PowerShell's formatter what to do with the new type. Here's a simple format definition that tells the formatter to call ToString() on the Hex instance - the method that does the conversion by calling ToString("X") on the integer field. "X" means format the integer as hexadecimal using upper case; a lower-case "x" would output the value using lower case (if you couldn't guess ;-)).

<Configuration>
  <ViewDefinitions>
    <View>
      <Name>Hex</Name>
      <ViewSelectedBy>
        <TypeName>Hex</TypeName>
      </ViewSelectedBy>
      <CustomControl>
        <CustomEntries>
          <CustomEntry>
            <CustomItem>
              <ExpressionBinding>
                <ScriptBlock>$_.ToString()</ScriptBlock>
              </ExpressionBinding>
            </CustomItem>
          </CustomEntry>
        </CustomEntries>
      </CustomControl>
    </View>
  </ViewDefinitions>
</Configuration>
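
You can see the difference between the two format specifiers at the prompt:

PS > (255).ToString("X")
FF
PS > (255).ToString("x")
ff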

If we don't load this format file, PowerShell just emits a couple of blank lines when you try to use the type. You'll notice from the screenshot above that the blank lines still appear; I'm not sure how to remove these - it looks pretty ugly compared to the output of an [int] object. If you want to play with this, download the zip file below, unzip the contents into a single folder and run "hex.ps1". I didn't bother with a full snap-in; the script just loads the DLL using reflection, and loads the format ps1xml too. Have fun.
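
hex.ps1 itself isn't reproduced here, but the loading it does boils down to two steps, roughly like this (a sketch - the file names are my assumptions, so check the zip's contents):

PS > [void][reflection.assembly]::LoadFrom("$pwd\Hex.dll")    # load the compiled Hex type
PS > update-formatdata -appendpath "$pwd\Hex.format.ps1xml"   # teach the formatter about it
PS > [hex]255 + [hex]1
0x100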

HexAccelerator1.zip (2.4 KB)

Excluding getters and setters from get-member output using a filter

Just a minor thing, but this is a filter I've been using for a while now as a replacement for Get-Member. It's always annoyed me that the get_ and set_ accessor methods are returned along with the actual properties. The only place that's actually useful is when you're using an object wrapped in the [xml] adapter, since those objects do not expose the XmlDocument's properties in the adapted member set.

Update: if you use MoW's PowerTab (of course you do!), you can disable display of the accessor methods with $PowerTabConfig.ShowAccessorMethods = $false.

filter get-memberex {
    $_ | gm | ? { -not ($_.name -match "^[gs]et_.+") }
}
new-alias gmx get-memberex
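
Usage is the same as gm; for example:

PS> $host.ui | gmx

Only the real properties and methods come back - no get_/set_ noise.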

SharePoint 2007 and the mysterious "CendentialKey"

Discovered while playing around with the new SPUserToken class and the associated goodness that is the new impersonation APIs in WSS 3.0 / MOSS 2007. From Reflector:

public static void SetApplicationCendentialKey(SecureString password)
{
    SPCredentialManager.CreateApplicationCendentialKey(password);
}

Ouch! How did that get past QA? There are only five methods on the class - looks like it was a late night for whoever wrote these bits. Unless I'm very much mistaken, shouldn't that read Credential?


Manipulating remote SharePoint Lists with PowerShell

Someone on the PowerShell usenet group asked if it was possible to interact with SharePoint lists through our favourite little shell. Marco Shaw responded, putting the pressure on by saying this was my bag of tricks. Who am I to say otherwise? So let's take a look at the recipe:

  1. One Get-WebService script
  2. One SharePoint list (any flavour except those based on a document library)
  3. One Invoke-ListOperation script
Note: these scripts can be invoked from any workstation running PowerShell. You don't need access to SharePoint DLLs or other such nonsense.

Step One: Creating the Web Service proxy

Download my get-webservice2.ps1 script (save it and rename to .ps1) and build a proxy to any remote Lists web service. I say any, because you only need to build the proxy once; you can switch sites at any time by changing the Url property of the $service object. PowerShell will show a graphical prompt for credentials for the SharePoint site, unless you specify the -Anonymous switch (of course, if your SharePoint site is not anonymous-enabled, that's a bit of a silly move).

$service = .\get-webservice2.ps1 http://sharepoint/_vti_bin/lists.asmx

Later, if you want to work with a different site, change the Url property like this:

$service.Url = "http://sharepoint/sites/root/subsite/_vti_bin/lists.asmx"

Ok, now we're ready to try the different operations: New, Update and Delete.

Step Two: Insert a new item

I chose the [hashtable] object to represent a new item, as there's a nice syntax baked into PowerShell for creating and initialising such a structure. Let's work with the Announcements list type. The two main fields I'm going to work with are the Title and Body fields. Note that even in a localised version of SharePoint - let's say French - where the fields are displayed as "Titre" (title) and "Corps" (body), we still use Title and Body when specifying the fields.
PS C:\> $item = @{Title = "Oisin"; Body="Hello, Word."}
PS C:\> $result = .\invoke-listoperation.ps1 $service new announcements $item
Success.
PS C:\> $result.row.ows_ID
2
If you get a success result back, the $result variable can be interrogated for the row that was just inserted. Above, I am retrieving the new ID for use in the next step.

Step Three: Modify an existing item

You probably noticed that the body of the announcement I just posted says "Word" instead of "World." Oops! All we need to do now is assign the ID of the newly inserted row to the $item hashtable, modify the Body, and use the Update command with our script:
PS C:\> $item.ID = 2
PS C:\> $item.Body = "Hello, World."
PS C:\> $result = .\invoke-listoperation.ps1 $service update announcements $item
Success.
Yee-ha. Fixed. Now it's time to remove such a pointless announcement.

Step Four: Delete an existing item

All we need to do now is pass the Delete command and a hashtable with an ID key set to the ID of the item we wish to delete. We can reuse the $item object at this point, since its ID was set during the last update operation (alternatively, you could just pass @{ID=2} as the item argument - same effect).
PS C:\> $result = .\invoke-listoperation.ps1 $service delete announcements $item
Success.
or alternatively:
PS C:\> $result = .\invoke-listoperation.ps1 $service delete announcements @{ID=2}
Success.
And there you have it - announcement deleted. It's not that hard once you have the web service proxy bit done -- believe me, that was the hard part ;-)

Step Five: Profit!

Sorry, this part is up to you! Seriously though, if you're not on the SharePoint bandwagon by now, what's wrong with you?! It is an utterly incredible product. I've been working with SharePoint in all its various guises since the days of the Digital Dashboard (remember that?), and WSS 3.0 / MOSS 2007 is the most amazing iteration yet. Btw, I haven't tested these scripts on WSS 2.0, but they should work without modification.

Here's the invoke-listoperation.ps1 script. Just copy and paste it into notepad, save it and away you go!

param (
    $Service = $(throw "need service reference!"),
    $Operation = $(throw "need operation: Update, Delete or New"),
    $ListName = $(throw "need name of list."),
    [hashtable]$Item = $(throw "need list item in hashtable format.")
)

# check if valid service reference provided
[void][system.Reflection.Assembly]::LoadWithPartialName("system.web.services")
if ($service -isnot [Web.Services.Protocols.SoapHttpClientProtocol]) {
    Write-Warning "`$Service is not a webservice instance; exiting."
    return
}

# check if valid operation (and fix casing)
$Operation = [string]("Update","Delete","New" -like $Operation)
if (-not $Operation) {
    Write-Warning "`$Operation should be Update, Delete or New."
    return
}

$xml = @"
<Batch OnError='Continue' ListVersion='1' ViewName='{0}'>
    <Method ID='1' Cmd='{1}'>{2}</Method>
</Batch>
"@

if ($service) {
    trap [Exception] {
        Write-Warning "Error: $_"
        return;
    }

    $listInfo = $service.GetListAndView($ListName, "")

    $listItem = ""
    foreach ($key in $item.Keys) {
        $listItem += ("<Field Name='{0}'>{1}</Field>" -f $key,$item[$key])
    }

    $batch = [xml]($xml -f $listInfo.View.Name,$operation,$listItem)
}

$response = $service.UpdateListItems($listInfo.List.Name, $batch)

$code = [int]$response.result.errorcode

if ($code -ne 0) {
    Write-Warning "Error $code - $($response.result.errormessage)"
} else {
    Write-Host "Success."
    $response.Result
}

Writing Portable Code

My last post got me thinking about the problems experienced when trying to write culture-aware software. Yeah, I know it was actually me that was unaware of the culture, but this time it's about the software end of the deal; in particular, the recently updated Microsoft Business Data Catalog Definition Editor for Microsoft's popular SharePoint 2007 server. If you read some of the comments on the blog, you'll see that various people (using a non-US English version of Windows) have installed it and come across a problem where the tool cannot find the local security group called "Builtin\Users." Oops. In the world of cutting-edge technology, people often install software that doesn't match the installed language of their OS. The fact is that all major symbolic computer languages are based around English, and the most popular software gets written in English first.

Here in Quebec, Canada, French is the primary language with English coming second (Canada is officially bilingual, although most of the country only speaks English). Localisation of software takes a fair amount of time. It's not just translating a resources file - there are hot-keys to reassign (the Bold shortcut in French MS Word is CTRL+G, for example, bold being "gras" in French), dialog boxes to resize, labels and controls to reposition, etc. Some languages are more verbose than others and end up with text that won't fit. However, there are things you can do to avoid certain problems - let's take the issue above as an example.

Logins and group names are just an abstraction over the Windows security subsystem. These things are actually represented by a value called a SID (System.Security.Principal.SecurityIdentifier). No matter what language version of Windows you use, the SIDs for built-in accounts and groups are the same:

First using an en-US system:

PS > $acc = new-object System.Security.Principal.NTAccount "Users"
PS > $acc.Translate( [System.Security.Principal.SecurityIdentifier] ).value
S-1-5-32-545

and a French (fr-FR) system:

PS > $acc = new-object System.Security.Principal.NTAccount "Utilisateurs"
PS > $acc.Translate( [System.Security.Principal.SecurityIdentifier] ).value
S-1-5-32-545
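
The translation works in the other direction too, which is exactly what portable code wants: start from the well-known SID and let Windows hand back the locale's name for it (output below from the en-US box; the French one prints BUILTIN\Utilisateurs):

PS > $sid = new-object System.Security.Principal.SecurityIdentifier "S-1-5-32-545"
PS > $sid.Translate( [System.Security.Principal.NTAccount] ).value
BUILTIN\Users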

In both cases, the SID is the same: S-1-5-32-545. An example of putting this to work is shown below - a simple If-Elevated function that takes two scriptblocks: the first is executed if the user is running as an administrator, the second if the user is, well, just a plain user:

# Usage:
#
# If-Elevated { .. admin code .. } { "sorry, need admin" }
#

function If-Elevated {
  param(
    [scriptblock]$AsAdmin = $(Throw "Missing 'as admin' script"),
    [scriptblock]$AsUser = $(Throw "Missing 'as user' script")
  )

  $identity = [security.principal.windowsidentity]::GetCurrent()
  $principal = new-object security.principal.windowsprincipal $identity
  # S-1-5-32-544 is the well-known SID for BUILTIN\Administrators
  $adminsRole = [system.security.principal.securityidentifier]"S-1-5-32-544"

  if ($principal.IsInRole($adminsRole)) {
    & $AsAdmin
  } else {
    & $AsUser
  }
}

So OK, it doesn't have localised messages, but at least it will execute correctly in other locales ;-) Have fun.

When is a scripting language not a language?

Some weeks ago, I started a new contract on a pretty monstrous MOSS (Microsoft Office SharePoint Server) 2007 project. The thing is, this is my first purely Francophone environment since I came to Canada four years ago. As the agency is part of the Canadian government -- and located in Quebec -- most of the software installed is the French version. The keyboards are fr-CA, Windows is French, and yep, SharePoint is installed in French. It's proving quite difficult to find my way around, as the translations are not really comparable; sometimes they are not even close. It's worse, though, because a lot of the idioms are France-French, not Quebec-French. As Quebecers (and confused French people) will tell you, it can be quite a different language at times.

Today, it got a lot worse.

I found myself having to define a calculated column - that is to say, a cell in a list that performs calculations based on other cells in the row. You've got the usual SUM, AVG etc. functions available. Only this time, I don't. After several frustrating attempts, I discovered that the scripting language itself has been translated into French. At first my reaction was incredulity - what is the point of that? They don't translate C# for other cultures, so why do this? Surely this kind of functionality is aimed at power users, like Excel users - and they don't translate Excel formulas in other locales of Office?!

Except they do.

Merde.

About the author

Irish. PowerShell MVP. .NET/ASP.NET/SharePoint developer and budding architect. Montrealer. Opinionated. Montreal, Quebec.
