
Wednesday, July 4, 2012

Trying Out Jobs in PowerShell

An older app in our workplace stack is a WebForms website project, and its directory structure is large enough that compiling the site takes a while. I have to run the site a fair amount for a rewrite effort, and it changes enough to make the initial build and run painfully slow.

Since I have been working on a build related PowerShell module lately, I thought precompiling the "legacy" site might be a good candidate for a background job. I have not used jobs in PowerShell before and for whatever reason I was having a hard time finding good, complete examples of using them. There were also some things that tripped me up, so here is an example for the future version of me to reference some day.
param (
	$TrunkPath = "D:\Projects\MyApp\trunk"
)	

function Start-WebsitePrecompile
{    
    $logFile = (join-path $TrunkPath "\build\output\logs\BackgroundCompile.log")
    "Build log file is $logFile"
        
    $msg = @"
Starting background compile of site. Use Get-Job to check progress. 
You may go on about your merry way but may want to leave the host open until complete.
"@
    
    Write-Output $msg
    $job = Start-Job -InputObject $TrunkPath -Name MyAppPageCompile -ScriptBlock {
        # doesn't appear transcription is supported here
        $trunk = $input

        Set-Alias aspnetcompile $env:windir\Microsoft.NET\Framework\v4.0.30319\aspnet_compiler.exe
        
        # see website solution file for these values
        $virtualPath = "/web"
        $physicalPath = (join-path $trunk "\web")
        $compilePath = $trunk + "\PrecompiledWeb\web"
        aspnetcompile -v $virtualPath -p $physicalPath -f -errorstack $compilePath        
    }
    
    # output details
    $job    
   
    Register-ObjectEvent $job -MessageData $logFile -EventName StateChanged `
        -SourceIdentifier Compile.JobStateChanged `
        -Action {            
            $logFile = $event.MessageData
            Set-Content -Force -Path $logFile `
            	-Value $(Receive-Job -Id $($Sender.Id) | Out-String)
            #$eventSubscriber | UnregisterEvent
            Unregister-event -SourceIdentifier Compile.JobStateChanged
            $eventSubscriber.Action | Remove-Job
            Write-Host "Job # $($sender.Id) ($($sender.Name)) complete. Details at $logFile." 
        }
}
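
A quick usage sketch (assuming the above is saved as Start-WebsitePrecompile.ps1, a filename of my own choosing, not part of the original script):
# dot-source the script (hypothetical filename) and kick off the background compile
. .\Start-WebsitePrecompile.ps1 -TrunkPath "D:\Projects\MyApp\trunk"
Start-WebsitePrecompile

# later, from the same host session
Get-Job -Name MyAppPageCompile             # check on job state
Receive-Job -Name MyAppPageCompile -Keep   # peek at any compiler output so far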

Some Notes

  • Everything inside the job's script block will be executed in another PowerShell process; anything from outside the script block that needs to be used inside must be passed in with the InputObject parameter (read via $input). While this might be obvious, it does mean potential refactoring considerations.
  • It didn't appear transcription was supported inside the script block which was disappointing.
  • I half expected that Start-Job would provide a parameter for a block to call when the job was complete. Register-ObjectEvent works but it's a bit verbose and isn't even mentioned in many posts talking about job management.
  • Like the job script block, the event handler action script block cannot refer to anything from the outside other than anything passed into the block with the MessageData parameter and automatic variables such as $event, $eventSubscriber, $sender, $sourceEventArgs, and $sourceArgs.
  • I went through some trial and error in getting the output from the job in the completed event. The Receive-Job / Set-Content code in the event action worked fine but it was not the most obvious initial syntax.
  • There are a couple of ways to unregister events (such as the Unregister-Event call in the action block), but I found that when I called the function again I received an error that the event was already subscribed to, so it was clear the unregistration was not working for some reason. The current code is working but similar code did not work previously. I dunno man, gremlins.
  • Event handler cleanup strikes me as a bit odd and this Register-TemporaryEvent script is worth a look.
I am tempted to refactor more of this developer build module to use background jobs and do more in parallel. However it is a bit tricky in that many of the functions are called both individually and chained together in driver functions, and they need to work in both "foreground" and "background" modes. It would also mean a loss of rich progress reporting, and things get more difficult in terms of debugging, output, code sharing, etc. Also, while multiple cores may help with parallel work, there is a law of diminishing returns to consider, as well as the hit to machine performance when trying to do other work while PowerShell crunches away.
Monday, May 14, 2012

Build Automation Part 4: Database and Report Deployments

Series Index

Build Automation Part 1: Overview and Pre-build Tasks
Build Automation Part 2: Building and Packaging
Build Automation Part 3: App Deployment Script
Build Automation Part 4: Database and Report Deployments

Unlike deploying the application itself in its entirety each time, database and report items have required incremental deployments due to their nature and sheer size. The question becomes how to manage these increments in coordination with the application bits to ensure everything is in sync.

I do not pretend to have all the answers here. I was hoping this part of the process could get overhauled more with our last release but there is only so much time in the day.

Some Challenges

Size, Dependencies, Impact Analysis

On the (Oracle) database side of our operational app we have around 800 tables, 500+ packages, 500+ views, 300+ functions and procedures, and a variety of other object types across 3 schemas. Running schema compares / diffs and building sync scripts has been a time-consuming pain in the past, regardless of the various tools we have tried.

The situation is complicated by some 10 database environments, other app databases our app is dependent on, various apps depending on our database, and dynamic SQL embedded in report RDL files or coming from ORMs like Entity Framework. Dependency and impact analysis can be difficult, particularly across systems.

On the report side this app has over 600 SSRS reports scattered over 8 servers, several folders and different versions of SQL Server Reporting Services.

Source Control

Source control for reports has not been much of a problem with SSRS, other than TFS generating a lot of unnecessary merge conflicts on those XML RDL files.

On the database side we have had some success using Team Coding in Toad with the TFS MSSCCI Provider. Quest came out with a native TFS provider but it did not support TFS work item association which ruled out our use of it.

The MSSCCI provider "works" with Toad for basic changes like packages, procedures, functions, triggers, and views, but boy is it not without a lot of quirks. So much so that for a while I saved any database object script changes to files on my desktop out of fear my changes would get overwritten, which used to happen quite a bit. The other problem is that not all changes are source controlled, such as table schema changes and data changes. We offset that via SQL file attachments to tasks tied to a project.



Pulling Report and Database Changes



I posted before about a custom TFS Artifact Manager (hereinafter "TAM") tool for pulling down report and database changeset files and task attachments, either for a single work item or a large batch of related work items, such as all tasks linked through a series of scenarios, backlog items and bugs tied to a project. I won't repeat those details here but we currently still use the tool to gather up all the report and database changes for a given project release. It is far from perfect but it beats manual guesswork and building sync scripts from database compares.

The TAM tool is also used to pull artifacts from various single-task ad hoc changes made outside of any official product release. Many reports can be added to the application at runtime through some dynamic report forms with common parameters; new records are added to the database and the reports are deployed outside of any app deployment. Likewise there are occasional database changes made independent of the application.

There are other tools mentioned in the TAM post that we may try using more in the future. Also, Troy Hunt has a good series of deployment posts including this one regarding Automated database releases with TeamCity and Red Gate. Doing exactly that with our schemas would make me a bit nervous but perhaps with tweaks and some experiments in the future.

Additionally I posted a review on Red Gate's Schema Compare for Oracle which can be quite a useful tool. We don't rely on it as heavily anymore with custom tools and processes but it is handy to use it to double-check things after deployments or for doing more one-off database syncs.

Deploying Reports

Ad hoc Report Deployments

When deploying from Development to Test, our CI process picks up any reports merged from $/MyApp/Reports/Dev/ to $/MyApp/Reports/Main/ and automatically deploys those via PowerShell and RS Utility. Any corresponding database changes are manually applied before that. Environments beyond Test currently require attaching report and database changes to a Help Desk ticket and routing to a DBA. The DBA runs any database changes and deploys the reports using the script mentioned in the next section.

App Report Deployments

For reports to be deployed with a given build of an application, the process is basically the same as the ad hoc process with respect to going from Dev to Test. One difference is the timing of merging the report changes in source control to correspond with any dependent changes to the application code. When moving beyond the Test environment, all reports tied to a given project work item are pulled using the TAM tool. They are then deployed en masse using PowerShell and the SSRS web service (without RS Utility), in a manner similar to this post on Uploading SSRS Reports with PowerShell.
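
For a rough idea of what that looks like, here is a minimal sketch of pushing a single RDL through the ReportService2005 endpoint; the server URL, folder, and file path are placeholders rather than our real values.
# minimal sketch: upload one RDL via the SSRS web service (placeholder values)
$ssrsUri = "http://reportserver/ReportServer/ReportService2005.asmx?WSDL"
$proxy = New-WebServiceProxy -Uri $ssrsUri -UseDefaultCredential

$rdlPath = "C:\Temp\Reports\SomeReport.rdl"
$targetFolder = "/MyApp/Reports"

$definition = [System.IO.File]::ReadAllBytes($rdlPath)
$name = [System.IO.Path]::GetFileNameWithoutExtension($rdlPath)

# CreateReport returns any warnings (e.g. broken shared data source references)
$warnings = $proxy.CreateReport($name, $targetFolder, $true, $definition, $null)
if ($warnings) { $warnings | ForEach-Object { Write-Warning $_.Message } }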



Deploying Database Changes

We currently do not have much automation around ad hoc database deployments but not much is needed there.

For app database changes we start by pulling the database changes using the TAM tool. In that tool a script "package" is built by choosing the scripts to include and specifying any needed order of execution. Previously someone (dev or DBA depending on environment) would either execute all those by hand in Toad, or build out an index/driver script and run that. It was not as bad as it might sound, given the tool produced combined SQL scripts for views, packages, procs, etc. Still, it was tedious if there were a number of data or schema migration scripts to be run in order.

Executing Database Scripts With PowerShell

Our resident Oracle expert Jim Tilson ("The Optimizer") had the idea of creating a PowerShell script that used SQL*Plus to generically execute all database scripts in a given directory. I paired with him to get the basic interaction going but this is his brainchild and work. He should probably be the one explaining this but no telling when that slacker will get around to blogging :). If you have any interest in Oracle, SQL optimization, database tech in general, or Ruby, you should reach out to him on Twitter and ask him to blog more (and tweet more while he is at it). At any rate this might be useful for others so I will post the code and attempt to explain it.

Structure

The script expects all the database script files to be located in subfolders where the script resides, one folder per schema name, with no subfolders within each schema folder (the script is not recursive).


Each file in a schema folder will be executed regardless of filename extension. Ordering is based on filename; our TAM tool prefixes a numeric wart on each file to ensure an obvious order. At the moment the script does not explicitly specify a name ordering but that's the default.
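
If I wanted that ordering to be explicit rather than relying on the provider default, a one-line tweak along these lines (not in the actual script) would do it:
# hypothetical tweak: sort the schema script files by name explicitly before running them
$files = @(get-childitem | where {!$_.PsIsContainer} | sort-object Name)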


Executing Scripts for Each Schema

At the bottom of the script, the server TNS name is prompted for and a couple of things are set before running the main Run-Scripts function.
set-location (Get-ScriptDirectory)
$dateWart = Get-DateWart
$server = read-host "Enter the server TNS name"
run-scripts
echo "Successfully ran all scripts."
Run-Scripts invokes a function to run the scripts for each schema, passing along the user and server info. This could be made more generic by assuming any subfolder where the PowerShell script resides represents a schema with database scripts to run.
function Run-Scripts
{
    foreach ($user in @("USER_SCHEMA_1", "USER_SCHEMA_2", "USER_SCHEMA_3")) 
	{ 
		run-scriptsforschema -server $server -user $user
	}
}

Running Scripts for a Schema

This function will temporarily set location to the schema subfolder corresponding to the user/schema name passed in. It prompts for a password for a later connection to that schema. Finally it enumerates all files in the schema folder, calls a Run-Script function to execute each, and writes progress as it goes.

Each successfully executed file is moved into a Completed subfolder. That file move is important as many migration scripts are written assuming they will only be run once, and we have been bitten by DBAs accidentally running scripts more than once.
function Run-ScriptsForSchema($server, $user)
{
    echo "Running scripts for $user."
    push-location    
    set-location $user    
	$password = get-password($user)    
    ensure-directory(".\Completed")

    $files = @(get-childitem | where {!$_.PsIsContainer})
	$count = 0
	
    foreach ($fileInfo in $files)
    {        
        write-progress -activity "Running scripts for $user" `
			-currentoperation $fileinfo.name -status ("Executing") `
			-PercentComplete (100*$count/$files.count)
        Run-Script $user $password $fileInfo
        $count++
        write-progress -activity "Running scripts for $user" `
			-currentoperation $fileinfo.name -status ("Done") `
			-PercentComplete (100*$count/$files.count)
        move-item -path $fileInfo.fullname -destination ".\Completed" -Force
    }
    
    write-progress -activity "Running scripts for $user" -status ("Complete") -completed
    pop-location    
    echo "Completed scripts for $user"
}

Running a Script

The Run-Script function takes care of some logging and calls a normalize function to tack on additional SQL before and after the SQL contained in the file (error handling, commits, etc.); more on that in a moment. Function notes follow.
  • Path - the sqlplus location should be in the SYSTEM PATH environment variable so fully qualifying it should not be needed. In my case the location is C:\app\[username]\product\11.2.0\client_1\.
  • -L Parameter - instructs the app to only attempt to log on once; otherwise, with bad credentials, it can hang awaiting input.
  • -M Parameter - indicates HTML output is desired from sqlplus.
  • Credentials - The server name captured earlier is passed in along with the user/schema and password parameter values.
  • SQL - the normalize script function returns a SQL string and that is piped into sqlplus to be executed.
  • Output - the output is sent to $logFile and the 2>&1 sends standard error to standard output.
  • Error checking - Finally $LASTEXITCODE is checked to see what sqlplus.exe exited with; if 0 it was successful, otherwise it is the Oracle error number. The process stops on any error; manual changes might be needed to address any problems then the script can be run again.
function Run-Script($user, $password, $fileInfo)
{
    $logDir = ("..\Logs\{0}\{1}" -f $dateWart, $user)
    ensure-directory $logDir
    $logFile = join-path $logDir ($fileInfo.basename + ".html")
    
    (normalize-script $fileinfo.fullname) | sqlplus.exe -L -M "HTML ON SPOOL ON" `
		-S "$user/""$password""@$server" >> $logfile 2>$1

    $lec = $LASTEXITCODE
    
    if ($lec -ne 0)
    {
        write-error ("ERROR executing {0}!" -f $fileInfo.FullName)
        exit
    }
}

Adjusting the SQL

There are two critical adjustments made to the SQL read from the database script files. The first is detecting a SQL error and exiting SQL*Plus with the error code. The other is issuing a commit at the end; most of our data-related scripts do not include a commit as they are often run and verified before a commit is issued. It is worth reading over the WHENEVER SQLERROR documentation as some types of errors will not trigger an exit; fully checking for all types of errors might require something more brittle like scanning the log files for certain error phrases.
function normalize-script($filename)
{
@"
    whenever sqlerror exit sql.sqlcode
    set echo off
    set termout off
    $([string]::join("`n", (get-content $filename -readcount 0)))
    commit;
    exit
"@
}

Helper Functions

At the top of the script are some helper functions and an interop services dll is loaded for later use in translating the secure password to a plain text string to be passed along to SQL*Plus.
[Reflection.Assembly]::LoadWithPartialName("System.Runtime.InteropServices")

function Get-ScriptDirectory
{
    Split-Path ((Get-Variable MyInvocation -scope script).Value.MyCommand.Path)
}

function ensure-directory($dir)
{
    if (!(test-path $dir)) { new-item $dir -type directory }
}

function Get-DateWart()
{
    get-date -uformat "%Y %m %d %H %M %S"
}

function get-password($user)
{
    $enterpassword = read-host -AsSecureString "Password for $user@$server"
    [System.Runtime.InteropServices.Marshal]::PtrToStringAuto(`
		[System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($enterpassword));
}

Analyzing the Results

The script stores logs under a Logs\[Timestamp]\Schema\ folder for troubleshooting and verification purposes.


SQL*Plus Alternatives

One alternative to SQL*Plus is using OracleCommand's ExecuteNonQuery method in Oracle.DataAccess.dll. I tried this approach back when I created an OracleScriptExecutor utility app that was designed to easily run SQL scripts against multiple schemas. It was a bit of a nightmare that I do not recommend. For one, you have to deal with annoyances like linefeed issues, semicolon and BEGIN/END block issues, and it is quite difficult to deal with multiple scripts combined in one SQL file (i.e. '/' delimited). It almost requires a full-blown SQL parsing engine to handle it correctly, so I'd rather delegate that pain to a tool like SQL*Plus that already handles such complexity.

In Conclusion

This really only scratches the surface of the problems and solutions in dealing with database and report deployments. With some time and TLC I am sure this beast could be further tamed. Thoughts, suggestions, tips, helpful tools, processes? Leave a comment!

I am running out of time in this series but hopefully I can touch on some CI details with TeamCity next.
Wednesday, May 9, 2012

Build Automation Part 3: App Deployment Script

Series Index

Build Automation Part 1: Overview and Pre-build Tasks
Build Automation Part 2: Building and Packaging
Build Automation Part 3: App Deployment Script
Build Automation Part 4: Database and Report Deployments

Types of Deployment Scripts

We currently have different deployment scripts that are run independently, though there is a desire to chain them together in automated fashion when time allows.
  • Application - Deploys the application bits
  • Database - Deploys the database script changes for a given project release
  • Reports - Deploys SSRS report changes for a given project release
The database and report artifact deployments deserve a dedicated post of their own in the future. This post will focus on the application deployment script.

Deployment Script Skeleton

Looking at the script in a top-down fashion is best I think. Some of the helper functions being called will be listed later.

Top

param (
    [string]$DeployToServer = "",
    [switch]$SkipBackup = $false,
    [switch]$automated = $false
) 

$global:ErrorActionPreference = "Stop"

# error on uninitialized variables, non-existent properties, bad function calls
Set-StrictMode -version 2
At the top of the script some script preferences are set and the following parameters are defined:
  • DeployToServer - Name of the target server to deploy to. The idea is this would bypass a prompt where the user chooses the server to deploy to. This would generally be set by a CI process (see the invocation sketch after this list). At the moment this isn't being used in the script as our build server went down in flames before I could make use of this.
  • SkipBackup - Allows turning off default behavior of backing up the existing app installation on the target deployment server.
  • Automated - This was meant to control whether the script ever paused for any user-interaction or not. It is used in a couple of places but not fully implemented. It could be replaced by use of DeployToServer but I wanted to be explicit. Typically this would just be set from a CI build.
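
As a rough sketch, a CI invocation might look like the following (the script filename is a placeholder of mine, not the real one):
# hypothetical non-interactive invocation from a CI build step; Deploy-MyApp.ps1 is a made-up name
.\Deploy-MyApp.ps1 -DeployToServer "tst-server" -automated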

Bottom

After all the functions in the deployment script, the following script level variables are defined. I went with an $_ naming convention for script level variables; $script:variableName may have been better but more verbose.
$_scriptPath = (get-scriptdirectory)

# import generic deployment script functions
. (join-path $_scriptPath "Deploy-Common.ps1")

$_packagePath = $_scriptPath # for now anyway, just being explicit here
$_transcriptFile = (join-path $_scriptPath "DeployTranscript.txt")

$_targetServer = ""
$_targetClickOnceDir = ""
$_targetSatelliteDir = ""
$_targetServicesDir = ""
$_targetRoot = ""
$_envName = ""

$_activity = "Initializing"
$_zipFileObject = $null

$_successfulDeploy = $false

$_scriptErrorMessage = ""
$_clickOnceUrl = ""
$_deployTS = $null
At the very bottom is the call to the main Publish-App function with error handling, invoking the product web page if successful, and transcript logging and emailing.
try 
{
    Init
    Publish-App
    
    if (!$script:automated)
    {
        $_clickOnceUrl = ("http://{0}.domain.com/MyApp/" -f $_targetServer)
        "Launching app landing page at $_clickOnceUrl"
        Invoke-InternetExplorer $_clickOnceUrl
    }
    
    $_successfulDeploy = $true
}
catch [System.Exception]
{
    $_scriptErrorMessage = ("Deployment failed with error: {0}{1}{1}Script: {2}  Line,Col: {3},{4}" `
		-f $_.Exception.ToString(), [System.Environment]::NewLine,  $_.InvocationInfo.ScriptName, `
		$_.InvocationInfo.ScriptLineNumber, $_.InvocationInfo.OffsetInLine)
    Write-Warning $_scriptErrorMessage    
}
finally
{
    Write-Progress "Done" "Done" -completed
    
    if (Get-CanTranscribe)
    {
        Stop-Transcript
        Send-Notification
    }
}

if (!$_successfulDeploy) { invoke-item  $_transcriptFile }

# if not an automated deploy then pause
if (!$script:automated)
{
    "`nPress enter to continue..."
    [void][System.Console]::ReadLine()
}

Initialization

The Init function kicks off transcription, reads in build version information from a file (See Initialization in Part 1), tacks on to the PATH environment variable, and sets up the PowerShell UI shell.
function Init
{
    #setting $ErrorActionPreference here doesn't appear to affect anything
    try { Stop-Transcript | out-null } catch { }    
    
    if (Get-CanTranscribe) { Start-Transcript -path $_transcriptFile }

    $buildInfo = Get-BuildInfo
    $env:path += ";$env:windir\Microsoft.NET\Framework\v4.0.30319;$env:windir\System32"
   
    # customize window shell
    if ($Host -and $Host.UI -and $Host.UI.RawUI)
    {
        try 
        {
            $ui = (Get-Host).UI.RawUI                
            $ui.WindowTitle = ("Deploy MyApp {0} ({1})" `
				-f $buildInfo.AppVersion, $buildInfo.FileVersion)
            
            if (!(Get-IsHostISE))
            {
                $ui.BackgroundColor = "DarkBlue"
                $ui.ForegroundColor = "White"
                $bufferSize = $ui.BufferSize
                $bufferSize.Width = 120
                $bufferSize.Height = 9000
                $ui.BufferSize = $bufferSize
                $winSize = $ui.WindowSize    
                $winSize.Width = 120
                $winSize.Height = 70
                $ui.WindowSize = $winSize
            }
        }
        catch [System.Exception]
        {
            ("Error configuring host UI: {0}" -f $_.Exception.Message)
        }
    }
}

Deployment Target Menu

This function is intended for user-interactive execution of the deployment; in our case this was mostly for production-level environments where another group performed the deployment. However it also came in handy in some other scenarios where we could not do completely automated continuous deployment, such as a period where our CI server was all hosed up. Additionally there are times where it is handy to push a custom build from anywhere to anywhere, outside of the normal process flow.

For automated CI deployments a target server name would be passed in and this user menu interaction would be skipped. This function displays a menu of valid target environments to deploy to, sets the target server accordingly, and invokes a deployment function with that target server name.

For the menu I originally tried just using $Host.ui.PromptForChoice as in this article by James O'Neill. However I did not like how that was laid out; everything was rendered horizontally instead of vertically and some things ran together or were not spaced to my liking. That led me to this post by Jeff Hicks, which my menu is based on; the Show-Menu function in the switch statement below is his function which I did not modify.
function Publish-App
{
    $menu = " `
    1 Development `
    2 Iteration `
    3 Test `
    4 Pre-release Training `
    5 Production `
    6 Post-release Training `
    `    
    C Cancel `
`    
Your choice "
   
    $targetServer = ""
    $script:_envName = ""
    $buildInfo = Get-BuildInfo
	$title = ("Select Target Destination for MyApp {0} ({1})" `
		-f $buildInfo.AppVersion, $buildInfo.FileVersion)

    # Keep looping and running the menu until the user selects a valid item or cancels.
    Do
    {
        switch (Show-Menu $menu $title -clear)
        {
           "1" { $targetServer = "dev-server"; $script:_envName = "Development"; }
           "2" { $targetServer = "itr-server"; $script:_envName = "Iteration"; }
           "3" { $targetServer = "tst-server"; $script:_envName = "Test"; }
           "4" { $targetServer = "pre-server"; $script:_envName = "Pre-release"; }
           "5" { $targetServer = "prod-server"; $script:_envName = "Production"; }
           "6" { $targetServer = "post-server"; $script:_envName = "Post-release"; }
           "C" { Write-Output "Cancel"; return; }           
        }
    
    } While (!$script:_envName -and !$targetServer)
    
    if ($targetServer -and $script:_envName)
    {    
        $choiceYes = New-Object System.Management.Automation.Host.ChoiceDescription "&Yes", "Answer Yes."
    	$choiceNo = New-Object System.Management.Automation.Host.ChoiceDescription "&No", "Answer No."
    	$options = [System.Management.Automation.Host.ChoiceDescription[]]($choiceYes, $choiceNo)
    	$result = $host.ui.PromptForChoice("Confirm deployment target", `
			"Deploy My App to $_envName ($targetServer)?", $options, 0)
        
        if ($result -eq 0)
        {
            Publish-ToServer $targetServer -skipBackup:$script:skipBackup
        }
        else 
        {
            Write-Output "Deployment cancelled"
        }
    }
}



Server Deployment Driver Function

This function sets some script level variables such as common folder locations, and calls functions to perform cleanup, back up any existing installation (see this post), and deploy ClickOnce, satellite and service files.
function Publish-ToServer (
    [string] $targetServer = $(throw "targetServer is required"),
    [switch] $skipBackup = $false )
{
    $startTime = [DateTime]::Now
    Write-Output "`n"
    Set-Activity "Deploying to $targetServer"
    Write-Log "Beginning deploy to target server $targetServer"
    $script:_targetServer = $targetServer
    $script:_targetRoot = "\\$targetServer\Share$"
    $script:_targetClickOnceDir = "\\$targetServer\Share$\MyApp\ClickOnce"
    $script:_targetSatelliteDir = "\\$targetServer\Share$\MyApp\Satellite"
    $script:_targetServicesDir = "\\$targetServer\Share$\MyApp\Services"
    
    Clear-OldFiles
    if (!$skipBackup) { Backup-ExistingInstall }
    
    Publish-ClickOnceFiles
    Publish-SatelliteFiles
    Publish-Service
    
    $script:_deployTS = [DateTime]::Now - $startTime   
    Write-Log ("Published to $targetServer in {0:N0} minute(s) and {1} second(s)`n" `
		-f [math]::floor($_deployTS.TotalMinutes), $_deployTS.Seconds)
}

Cleanup Before Backup

Before backing up the existing target folders, the script does some cleanup to remove files and folders that are not worth backing up. Mostly this cleanup is around removing old ClickOnce folders; because of the version naming convention there will quickly be a large number of folders.



The function below orders the version folders by modified time descending and selects the first one to determine the most recent ClickOnce folder (you could argue there are some holes in that logic). It then keeps only that one, getting rid of all the others. In this way it only backs up the last ClickOnce folder. If you are starting fresh this is not a problem per se, but in my case there were many existing versions out there already.
function Clear-OldFiles
{
    Set-Activity "Cleaning up old ClickOnce versions"
	Clear-OldClickOnceAppFiles $_targetClickOnceDir	
	# additional cleanup here removed...
}

function Clear-OldClickOnceAppFiles ($rootDir)
{
    if (!(Test-Path $rootDir))
    {
        Write-Log "$rootDir doesn't exist; nothing to do"
        return;
    }

    # exclude on subdirectory names doesn't seem to work
	# so we'll just rename dir, grab most recent child, copy over    
    # http://tinyurl.com/copy-item-exclude
    $appFilesDir = (join-path $rootDir "\Application Files")
    
    if (!(Test-Path $appFilesDir))
    {
        Write-Log "Didn't find Application Files folder at $appFilesDir; nothing to do"
        return
    }
    
    Write-Log ("Removing old ClickOnce app files beyond one version back from {0}" `
		-f $appFilesDir)
    $folders = @(Get-ChildItem -Path $appFilesDir -recurse `
		| Where-Object {$_.PSIsContainer})
        
    if ($folders.Length -le 1)
    {
        Write-Log ("No old versions to remove (folder count was {0}); exiting" `
			-f ($folders.Length))
        return
    }
    else 
    {
        Write-Log ("Found {0} ClickOnce version folder(s)" -f ($folders.Length))
    }
    
    Write-Log "Renaming $appFilesDir to Application Files Temp"
    Rename-Item $appFilesDir "Application Files Temp"    
    
    Write-Log "Determining most recent ClickOnce app files subfolder"
    $appFilesTempDir = (join-path $rootDir "\Application Files Temp")    
    $mostRecentAppFilesDir = Get-ChildItem -Path $appFilesTempDir `
		| Where-Object {$_.PSIsContainer} `
		| Sort-Object LastWriteTime -Descending | Select-Object -First 1
    Write-Log "Most recent app files dir is $mostRecentAppFilesDir"        
    
    New-Item $appFilesDir -type directory
    Write-Log ("Copying {0} to $appFilesDir" -f ($mostRecentAppFilesDir.FullName))
    copy-item -recurse $mostRecentAppFilesDir.FullName $appFilesDir    
    
    $folderCount = ((Get-ChildItem -Path $appFilesTempDir `
		| Where-Object {$_.PSIsContainer})).Length
    Write-Log ("Removing {0} old version(s)" -f ($folderCount-1))
    Remove-Dir $appFilesTempDir
    
    Write-Log "Old ClickOnce app files removed"
}

Deploying Satellite Files

Because some of the client satellite files are loaded directly off the network by this app, some files will be locked if users are running the app. This function first calls a helper function that uses PowerShell remoting to disconnect file sessions to a remote server. I'm not a fan of loading assemblies directly off a network share; I think syncing the client files with the server and then loading on the client is better but it is what it is.

The function then deletes all the files in the target Satellite directory minus the config file. At the moment the deployment script is not updating configuration files though that was the plan in the beginning.

There is also a hackish sleep call between deleting files and copying as there were sporadic access denied errors of a locking / timing nature. Finally a call is made to a copy helper function that has support for additional attempts on error as well as outputting the results of what was copied.
function Publish-SatelliteFiles
{
    Set-Activity "Deploying Staging files"    
    Disconnect-FileSessions $_targetServer   
           
    # exclude deleting config - currently configured by hand until it can be automated
    Remove-RootFilesInDir $_targetSatelliteDir -exclude "MyApp.Client.exe.config"
    
    "Pausing between delete and copy"
    Start-Sleep -s 3
    
    Copy-Files -from "$_packagePath\Satellite\**" -dest $_targetSatelliteDir `
		-recurse -attempts 2
}

Deploying ClickOnce Files

The ClickOnce deployment is similar.
function Publish-ClickOnceFiles
{
    Set-Activity "Deploying ClickOnce files"
    Disconnect-FileSessions $_targetServer
    
    Remove-Dir (join-path $_targetClickOnceDir "Application Files")    
    Remove-RootFilesInDir $_targetClickOnceDir
    
    "Pausing between delete and copy"
    Start-Sleep -s 3
    
    Copy-Files -from "$_packagePath\ClickOnce\**" -dest $_targetClickOnceDir `
		-recurse -attempts 2
    Copy-Files -from "$_packagePath\BuildInfo.csv" -dest $_targetClickOnceDir
}

Deploying Service Files

I posted Install a Windows Service Remotely with PowerShell a while back so refer to it for additional details such as the functions Uninstall-Service, Install-Service and Start-Service.
function Publish-Service
{
    Set-Activity "Deploying Service files"
    $serviceName = "MyAppDataService"
    
    Write-Log "Stopping, uninstalling service $serviceName on $_targetServer"
    Uninstall-Service $serviceName $_targetServer
    "Pausing to ensure files are not locked during delete..."
    Start-Sleep -s 5 # Yeah I know, don't beat me up over this
    
    Remove-RootFilesInDir $_targetServicesDir    
   
    Copy-Files "$_packagePath\Services\**" $_targetServicesDir -recurse
    New-RestartServiceCommand
    
    Install-Service `
    -ServiceName $serviceName `
    -TargetServer $_targetServer `
    -DisplayName "MyApp Data Service" `
    -PhysicalPath "D:\Apps\MyApp\Services\MyApp.DataService.exe" `
    -Username "NT AUTHORITY\NetworkService" `
    -Description "Provides remote TCP/IP communication between the MyApp client application and the database tier."
        
    Start-Service $serviceName $_targetServer
}
The New-RestartServiceCommand function creates a batch file that restarts the Windows service. On each target server there is a scheduled task that invokes this batch file daily late at night. Originally that was done to help ensure any memory and network resources were properly released in the event of unexpected issues. The scheduled task is currently a one-time manual setup process though creating it could certainly be automated as well.
function New-RestartServiceCommand
{    
    $file = (join-path $_targetServicesDir "MyAppServiceRestart.bat")
    "Creating $file for the nightly scheduled task to restart the service"
    if (Test-Path $file) { Remove-Item -Force $file }
    Add-Content $file "REM This is for automatically restarting the MyApp data service via a nightly scheduled task"
    Add-Content $file "net stop `"MyApp Data Service`""
    Add-Content $file "net start `"MyApp Data Service`""
}
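
If I ever get around to automating that one-time setup, something like the following schtasks call is probably all it would take; the task name and time are placeholders, and the path assumes the services folder maps to D:\Apps\MyApp\Services on the target as in Install-Service above.
# hypothetical sketch: create the nightly restart task on the target server with schtasks.exe
$batchFile = "D:\Apps\MyApp\Services\MyAppServiceRestart.bat"
schtasks /Create /S $_targetServer /RU "SYSTEM" /SC DAILY /ST 23:30 `
    /TN "MyApp Service Restart" /TR $batchFile /F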

Some Common Helper Functions

Some of the common helper functions used are below (functions detailed in other referenced posts are omitted).

File I/O

function Copy-Files([string]$from, [string]$dest, [switch]$recurse, [int]$attempts = 1)
{
    "Copying $from to $dest with recurse $recurse" 
    $result = $null
    for ($i=1; $i -le $attempts; $i++)
    {
        try 
        {
            $result = Copy-Item -Recurse:$recurse -Force -PassThru $from `
				-Destination $dest
            break
        }
        catch [System.Exception]
        {
            if ($i -lt $attempts)
            {
                ("Copy failed: '{0}'. Pausing. Max attempts: {1}, Attempts: {2}" `
					-f $_.Exception.Message, $attempts, $i)
                Start-Sleep -s 3
            }
            else { throw }
        }
    }
    
    if ($result) {foreach ($i in $result) {("Copied {0}" -f $i.FullName)}}
}

function Remove-Dir([string]$path)
{
    if (Test-Path $path)
    {
        Write-Output "Removing folder '$path'"
        Remove-Item -recurse -force $path
    }
}

function Remove-RootFilesInDir([string]$path, [string]$pattern = "*.*", `
	[string[]]$exclude)
{
    $deleteWhat = (join-path $path $pattern)
    "Removing $deleteWhat"
    remove-item -Force $deleteWhat -Exclude $exclude
}

PowerShell Host Related

function Get-CanTranscribe
{
    # probably not the best way to answer this question but will at least rule out ISE
    return (!(Get-IsHostISE))
}

function Get-IsHostISE
{
    return ((Get-Host).Name -eq "Windows PowerShell ISE Host")
}

function get-scriptdirectory 
{ 
    if (Test-Path variable:\hostinvocation) 
    {
        $FullPath=$hostinvocation.MyCommand.Path
    }
    else 
    {
        $FullPath=(get-variable myinvocation -scope script).value.Mycommand.Definition
    }
    if (Test-Path $FullPath)
    { 
        return (Split-Path $FullPath) 
    }
    else
    { 
        $FullPath=(Get-Location).path
        Write-Warning ("Get-ScriptDirectory: Powershell Host <" + $Host.name `
			+ "> may not be compatible with this function, the current directory <" `
			+ $FullPath + "> will be used.")
        return $FullPath
    }
}

Miscellaneous

# note that net session \\computername /delete won't work w/remote deployment
#     NET SESSION displays incoming connections only.
#     In other words it must be run on the machine that is acting as the server.
# Enabling PS Remoting: http://technet.microsoft.com/en-us/magazine/ff700227.aspx
# 1) On target server ensure that winrm service is running
#    In PowerShell: get-service winrm
#
# 2) Enable PS remoting on the target server
#    Enable-PSRemoting -force
function Disconnect-FileSessions ([string]$server = $(throw "server is required"))
{
    "Disconnecting file sessions to $server"    
    $S=NEW-PSSESSION -computername $server
    INVOKE-COMMAND -Session $s -scriptblock { (NET SESSION /delete /y) }
    REMOVE-PSSESSION $S
}

function Invoke-InternetExplorer([string]$url)
{
    $IE=new-object -com internetexplorer.application
    $IE.navigate2($url)
    $IE.visible=$true
}

function Send-Email($from, $to, $subject, $body, $smtpServer = "mail.domain.com", `
	$attachment = $null, $isHtmlBody = $true)
{
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)    
    $msg = new-object Net.Mail.MailMessage
    $msg.From = $from
    $msg.To.Add($to)
    $msg.Subject = $subject
    $msg.IsBodyHtml = $isHtmlBody
    $msg.Body = $body
    
    if ($attachment)
    {
        $att = new-object Net.Mail.Attachment($attachment)
        $msg.Attachments.Add($att)
    }
    
    $smtp.Send($msg)
    if ($attachment) { $att.Dispose() }
}

Other Functions

Other functions that are not as generic/common in nature but are included in the main script follow.

Retrieving Build Information

The build info file discussed previously in this series is packaged in the same directory as the script and read for displaying in the PowerShell console and in sending a deployment notification.
function Get-BuildInfo
{    
	$buildInfoFile = (join-path (get-scriptdirectory) "BuildInfo.csv")
	return Import-Csv $buildInfoFile
}
A sample of this file:
"AppVersion","FileVersion","BuiltOn","ClickOnceRevision","ClickOncePublishVersion"
"3.3.0.17","2012.05.07.1008","5/7/2012 10:08:53 AM","117","3.3.0.117"

Logging, Diagnostics and Progress

function Set-Activity([string]$activity)
{
    $script:_activity = $activity
    Write-Log "Current Activity: $_activity"
}

function Write-Log ([string]$message)
{
    write-output $message
    write-progress -activity $_activity -status $message
}

Backup Functions

Functions such as Backup-ExistingInstall, Backup-Dir, and Compress-Files are included in Compression Experiments In the Build and Deployment Process.

Deployment Email

function Send-Notification
{
    $buildInfo = Get-BuildInfo
    $env = $script:_envName
    $appVer = $buildInfo.AppVersion
    $fileVer = $buildInfo.FileVersion
    $publishVer = $buildInfo.ClickOncePublishVersion
    $builtAt = $buildInfo.BuiltOn
    
    $deployText = "deployed"
    if (!$_successfulDeploy) {$deployText = "deployment failed"}
    
    $subject = "MyApp v {0} {1} to {2} ({3})" -f $buildInfo.AppVersion, `
		$deployText, $script:_envName, $_targetServer
    $deployedFrom = [Environment]::MachineName
    $deployedBy = [Environment]::UserName
    $deployedAt = [DateTime]::Now.ToString("G")
    
    $successOrFail = ""
    
    if ($_successfulDeploy) { $successOrFail = "Successful: True" }
    else 
    {
        $successOrFail = "Successful: False`n`n" + "Error: " + $_scriptErrorMessage + "`n"
    }
    
    $deployTime = ""
    if ($_deployTS)
    {
        $deployTime = ("Deployment completed in {0:N0} minute(s) and {1} second(s)`n" `
			-f [math]::floor($_deployTS.TotalMinutes), $_deployTS.Seconds)
    }
    
    $br = "
" $body = @" MyApp deployment results follow.$br$br $successOrFail$br$br Environment: $env ($_targetServer)$br Run Webpage: $_clickOnceUrl$br$br App Version: $appVer$br Publish Version: $publishVer$br File Version: $fileVer$br$br Built At: $builtAt$br$br Deployed from $deployedFrom by $deployedBy. Deployment details are attached.$br $deployTime$br$br This message was sent by an automated process. "@ $to = "PRODUCT_SUPPORT@domain.com" Send-Email -from "$deployedBy@domain.com" -to $to ` -subject $subject -body $body -attachment $_transcriptFile }

In Conclusion

That wraps up the initial version of this deployment script. Potential changes going forward are:
  • Adjustments to re-integrate this with CI (build server is currently down)
  • Updating app config files from the script
  • Automating creation of a scheduled task to restart the service
  • Script refactoring and cleanup
  • Kicking off database and/or report script deployments from app script
  • "Down for maintenance" page for users
  • Dependent apps - another web app uses some business and data components of this app and it should be updated when this app is deployed

Potential future posts include the database and report deployment scripts and CI setup.