
Build Automation Part 4: Database and Report Deployments

Series Index

Build Automation Part 1: Overview and Pre-build Tasks
Build Automation Part 2: Building and Packaging
Build Automation Part 3: App Deployment Script
Build Automation Part 4: Database and Report Deployments

Unlike deploying the application itself in its entirety each time, database and report items have required incremental deployments due to their nature and sheer size. The question becomes how to manage these increments in coordination with the application bits to ensure everything is in sync.

I do not pretend to have all the answers here. I was hoping this part of the process could get overhauled more with our last release but there is only so much time in the day.

Some Challenges

Size, Dependencies, Impact Analysis

On the (Oracle) database side of our operational app we have around 800 tables, 500+ packages, 500+ views, 300+ functions and procedures, and a variety of other object types across 3 schemas. Running schema compares/diffs and building sync scripts has been a time-consuming pain in the past, regardless of the various tools we have tried.

The situation is complicated by some 10 database environments, other app databases our app is dependent on, various apps depending on our database, and dynamic SQL embedded in report RDL files or coming from ORMs like Entity Framework. Dependency and impact analysis can be difficult, particularly across systems.

On the report side this app has over 600 SSRS reports scattered across 8 servers, several folders, and different versions of SQL Server Reporting Services.

Source Control

Source control for SSRS reports has not been much of a problem, other than TFS generating a lot of unnecessary merge conflicts on those XML RDL files.

On the database side we have had some success using Team Coding in Toad with the TFS MSSCCI Provider. Quest came out with a native TFS provider but it did not support TFS work item association which ruled out our use of it.

The MSSCCI provider "works" with Toad for basic changes like packages, procedures, functions, triggers, views, etc. but boy it is not without a lot of quirks. So much so that for a while I saved any database object script changes to files on my desktop out of fear my changes would get overwritten, which used to happen quite a bit. The other problem is that not all changes are source controlled, such as table schema changes and data changes. We offset that via some SQL file attachments to tasks tied to a project.

Pulling Report and Database Changes

I posted before about a custom TFS Artifact Manager (hereinafter "TAM") tool for pulling down report and database changeset files and task attachments, either for a single work item or a large batch of related work items, such as all tasks linked through a series of scenarios, backlog items and bugs tied to a project. I won't repeat those details here but we currently still use the tool to gather up all the report and database changes for a given project release. It is far from perfect, but it beats manual guesswork and building sync scripts from database compares.

The TAM tool is also used to pull artifacts from various single task ad-hoc changes made outside of any official product release. Many reports can be added to the application dynamically through some dynamic report forms with common parameters; new records are added to the database and the reports are deployed outside of any app deployment. Likewise there are occasional database changes made independent of the application.

There are other tools mentioned in the TAM post that we may try using more in the future. Also, Troy Hunt has a good series of deployment posts including this one regarding Automated database releases with TeamCity and Red Gate. Doing exactly that with our schemas would make me a bit nervous, but perhaps with some tweaks and experimentation we will get there in the future.

Additionally I posted a review of Red Gate's Schema Compare for Oracle, which can be quite a useful tool. We don't rely on it as heavily anymore with custom tools and processes, but it is handy for double-checking things after deployments or for doing more one-off database syncs.

Deploying Reports

Ad hoc Report Deployments

When deploying from Development to Test, our CI process picks up any reports merged from $/MyApp/Reports/Dev/ to $/MyApp/Reports/Main/ and automatically deploys those via PowerShell and RS Utility. Any corresponding database changes are manually applied before that. Environments beyond Test currently require attaching report and database changes to a Help Desk ticket and routing to a DBA. The DBA runs any database changes and deploys the reports using the script mentioned in the next section.
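The CI deploy script itself is not shown in this post, but a minimal sketch of driving the RS Utility (rs.exe) from PowerShell might look like the following. The server URL, build output path, .rss script name, and target folder here are all hypothetical:

```powershell
# Hypothetical sketch: push each merged RDL to SSRS via rs.exe.
# rs.exe runs a VB.NET script (.rss) against the SSRS SOAP endpoint;
# -v passes named variables into that script.
$reportServerUrl = "http://tst-server/reportserver"   # assumed URL
$changedReports = Get-ChildItem "C:\Builds\MyApp\Reports\Main" -Filter *.rdl

foreach ($report in $changedReports)
{
    & rs.exe -i DeployReport.rss -s $reportServerUrl `
        -v reportPath="$($report.FullName)" -v targetFolder="/MyApp"
    if ($LASTEXITCODE -ne 0) { throw "rs.exe failed for $($report.Name)" }
}
```

DeployReport.rss would contain the actual upload logic (rs.exe supplies the authenticated `rs` proxy object to that script).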

App Report Deployments

For reports to be deployed with a given build of an application, the process is basically the same as the ad hoc process with respect to going from Dev to Test. One difference is the timing of merging the report changes in source control to correspond with any dependent changes to the application code. When moving beyond the Test environment, all reports tied to a given project work item are pulled using the TAM tool. They are then deployed en masse using PowerShell and the SSRS web service (without RS Utility), in a manner similar to this post on Uploading SSRS Reports with PowerShell.
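A hedged sketch of that bulk web-service deployment, in the spirit of the linked post; the server URL, local folder, and target report folder are assumptions:

```powershell
# Hypothetical sketch: bulk-deploy RDL files via the SSRS SOAP web service.
$ssrs = New-WebServiceProxy -Uri "http://pre-server/reportserver/ReportService2005.asmx" `
    -UseDefaultCredential

foreach ($rdl in Get-ChildItem "C:\Temp\TamOutput\Reports" -Filter *.rdl)
{
    $bytes = [System.IO.File]::ReadAllBytes($rdl.FullName)
    # CreateReport(name, parentFolder, overwrite, definition, properties)
    $warnings = $ssrs.CreateReport($rdl.BaseName, "/MyApp", $true, $bytes, $null)
    if ($warnings) { $warnings | ForEach-Object { Write-Warning $_.Message } }
}
```

CreateReport overwrites the existing report when the third argument is $true and returns any definition warnings rather than failing.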

Deploying Database Changes

We currently do not have much automation around ad hoc database deployments but not much is needed there.

For app database changes we start by pulling the database changes using the TAM tool. In that tool a script "package" is built by choosing the scripts to include and specifying any needed order of execution. Previously someone (dev or DBA depending on environment) would either execute all those by hand in Toad, or build out an index/driver script and run that. It was not as bad as it might sound, given the tool produced combined SQL scripts for views, packages, procs, etc. Still it was tedious if there were a number of data or schema migration scripts to be run in order.

Executing Database Scripts With PowerShell

Our resident Oracle expert Jim Tilson ("The Optimizer") had the idea of creating a PowerShell script that used SQL*Plus to generically execute all database scripts in a given directory. I paired with him to get the basic interaction going but this is his brainchild and work. He should probably be the one explaining this but no telling when that slacker will get around to blogging :). If you have any interest in Oracle, SQL optimization, database tech in general, or Ruby, you should reach out to him on Twitter and ask him to blog more (and tweet more while he is at it). At any rate this might be useful for others so I will post the code and attempt to explain it.


The script expects all the database script files to be located in subfolders where the script resides, one folder per schema name, and no subfolders within each schema folder (not recursive).
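For example, with the three schemas used later in the post, the expected layout would look something like this (the script filename is hypothetical):

```
RunDatabaseScripts.ps1      <- the PowerShell script
USER_SCHEMA_1\
    010_AlterTableX.sql
    020_MigrateDataY.sql
USER_SCHEMA_2\
    010_CreatePackageZ.sql
USER_SCHEMA_3\
    010_UpdateViewW.sql
```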

Each file in a schema folder will be executed regardless of filename extension. Ordering is based on filename; our TAM tool prefixes a numeric wart on each file to ensure an obvious order. At the moment the script does not explicitly specify a name ordering but that's the default.
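If you wanted the name ordering to be explicit rather than relying on the default enumeration behavior, a one-line tweak to the file enumeration (shown later in Run-ScriptsForSchema) would do it:

```powershell
# Explicitly sort by filename so the numeric prefixes control execution order
$files = @(get-childitem | where {!$_.PsIsContainer} | sort-object Name)
```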

Executing Scripts for Each Schema

At the bottom of the script, the server TNS name is prompted for and a couple things are set before running the main run-scripts function.
set-location (Get-ScriptDirectory)
$dateWart = Get-DateWart
$server = read-host "Enter the server TNS name"
Run-Scripts
echo "Successfully ran all scripts."
Run-Scripts invokes a function to run the scripts for each schema, passing along the user and server info. This could be made more generic by assuming any subfolder where the PowerShell script resides represents a schema with database scripts to run.
function Run-Scripts
{
    foreach ($user in @("USER_SCHEMA_1", "USER_SCHEMA_2", "USER_SCHEMA_3"))
    {
        Run-ScriptsForSchema -server $server -user $user
    }
}

Running Scripts for a Schema

This function will temporarily set location to the schema subfolder corresponding to the user/schema name passed in. It prompts for a password for a later connection to that schema. Finally it enumerates all files in the schema folder, calls a Run-Script function to execute each, and writes progress as it goes.

Each successfully executed file is moved into a Completed subfolder. That file move is important as many migration scripts are written assuming they will only be run once, and we have been bitten by DBAs accidentally running scripts more than once.
function Run-ScriptsForSchema($server, $user)
{
    echo "Running scripts for $user."
    set-location $user
    $password = get-password($user)
    # make sure the Completed folder exists before moving files into it
    ensure-directory ".\Completed"

    $files = @(get-childitem | where {!$_.PsIsContainer})
    $count = 0
    foreach ($fileInfo in $files)
    {
        write-progress -activity "Running scripts for $user" `
            -currentoperation $fileInfo.Name -status ("Executing") `
            -PercentComplete (100*$count/$files.count)
        Run-Script $user $password $fileInfo
        $count++
        write-progress -activity "Running scripts for $user" `
            -currentoperation $fileInfo.Name -status ("Done") `
            -PercentComplete (100*$count/$files.count)
        move-item -path $fileInfo.fullname -destination ".\Completed" -Force
    }
    write-progress -activity "Running scripts for $user" -status ("Complete") -completed
    echo "Completed scripts for $user"
    set-location ..
}

Running a Script

The Run-Script function takes care of some logging and calls a normalize function to tack on additional SQL before and after the SQL contained in the file (error handling, commits, etc.); more on that in a moment. Function notes follow.
  • Path - the sqlplus location should be in the SYSTEM PATH environment variable so fully qualifying it should not be needed. In my case the location is C:\app\[username]\product\11.2.0\client_1\.
  • -L Parameter - instructs sqlplus to only attempt to log on once; otherwise with bad credentials it can hang awaiting input.
  • -M Parameter - indicates HTML output is desired from sqlplus.
  • Credentials - The server name captured earlier is passed in along with the user/schema and password parameter values.
  • SQL - the normalize script function returns a SQL string and that is piped into sqlplus to be executed.
  • Output - the output is sent to $logFile and the 2>&1 redirects standard error to standard output.
  • Error checking - Finally $LASTEXITCODE is checked to see what sqlplus.exe exited with; if 0 it was successful, otherwise it is the Oracle error number. The process stops on any error; manual changes might be needed to address any problems, then the script can be run again.
function Run-Script($user, $password, $fileInfo)
{
    $logDir = ("..\Logs\{0}\{1}" -f $dateWart, $user)
    ensure-directory $logDir
    $logFile = join-path $logDir ($fileInfo.basename + ".html")
    (normalize-script $fileinfo.fullname) | sqlplus.exe -L -M "HTML ON SPOOL ON" `
		-S "$user/""$password""@$server" >> $logfile 2>&1

    $lec = $LASTEXITCODE
    if ($lec -ne 0)
    {
        write-error ("ERROR executing {0}!" -f $fileInfo.FullName)
    }
}

Adjusting the SQL

There are two critical adjustments made to the SQL read from the database script files to execute. The first is detecting a SQL error and exiting SQL*Plus with the error code. The other is issuing a commit at the end; most of our data related scripts do not include a commit as often they are run and verified before issuing a commit. It is worth reading over the WHENEVER SQLERROR documentation as some types of errors will not trigger an exit; fully checking for all types of errors might require something more brittle like scanning the log files for certain error phrases.
function normalize-script($filename)
{
    # wrap the file's SQL with error handling before and a commit / exit after
    return @"
whenever sqlerror exit sql.sqlcode
set echo off
set termout off
$([string]::join("`n", (get-content $filename -readcount 0)))
commit;
exit
"@
}

Helper Functions

At the top of the script are some helper functions and an interop services dll is loaded for later use in translating the secure password to a plain text string to be passed along to SQL*Plus.

function Get-ScriptDirectory
{
    Split-Path ((Get-Variable MyInvocation -scope script).Value.MyCommand.Path)
}

function ensure-directory($dir)
{
    if (!(test-path $dir)) { new-item $dir -type directory }
}

function Get-DateWart()
{
    get-date -uformat "%Y %m %d %H %M %S"
}

function get-password($user)
{
    $enterpassword = read-host -AsSecureString "Password for $user@$server"
    # convert the secure string to plain text to pass along to SQL*Plus
    return [System.Runtime.InteropServices.Marshal]::PtrToStringAuto(
        [System.Runtime.InteropServices.Marshal]::SecureStringToBSTR($enterpassword))
}

Analyzing the Results

The script stores logs under a Logs\[Timestamp]\Schema\ folder for troubleshooting and verification purposes.

SQL*Plus Alternatives

One alternative to SQL*Plus is using OracleCommand's ExecuteNonQuery method in Oracle.DataAccess.dll. I tried this approach back when I created an OracleScriptExecutor utility app that was designed to easily run SQL scripts against multiple schemas. It was a bit of a nightmare that I do not recommend. For one you have to deal with annoyances like linefeed issues, semicolon and BEGIN/END block issues, and it is quite difficult to deal with multiple scripts combined in one SQL file (i.e. '/' delimited). It almost requires a full-blown SQL parsing engine to handle it correctly, so I'd rather delegate that pain to a tool like SQL*Plus that already handles such complexity.
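To illustrate why naive splitting fails, consider a single (hypothetical) script file mixing plain statements and a PL/SQL block; the semicolons inside the block must not split it, while the trailing '/' lines each terminate an executable unit:

```sql
-- One file, multiple executable units: SQL*Plus handles this natively
UPDATE app_config SET config_value = 'Y' WHERE config_key = 'FEATURE_X';

BEGIN
   UPDATE audit_log SET touched_on = SYSDATE;  -- inner semicolons are not delimiters
END;
/

CREATE OR REPLACE PROCEDURE touch_audit IS
BEGIN
   UPDATE audit_log SET touched_on = SYSDATE;
END;
/
```

Splitting this correctly with string manipulation around ';' and '/' is exactly the kind of parsing problem ExecuteNonQuery pushes back onto you.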

In Conclusion

This really only scratches the surface of the problems and solutions in dealing with database and report deployments. With some time and TLC I am sure this beast could be further tamed. Thoughts, suggestions, tips, helpful tools, processes? Leave a comment!

I am running out of time in this series but hopefully I can touch on some CI details with TeamCity next.

Build Automation Part 3: App Deployment Script

Series Index

Build Automation Part 1: Overview and Pre-build Tasks
Build Automation Part 2: Building and Packaging
Build Automation Part 3: App Deployment Script
Build Automation Part 4: Database and Report Deployments

Types of Deployment Scripts

We currently have different deployment scripts that are run independently though there is a desire to chain them together in automated fashion when time allows.
  • Application - Deploys the application bits
  • Database - Deploys the database script changes for a given project release
  • Reports - Deploys SSRS report changes for a given project release
The database and report artifact deployments deserve a dedicated post of their own in the future. This post will focus on the application deployment script.

Deployment Script Skeleton

Looking at the script in a top-down fashion is best I think. Some of the helper functions being called will be listed later.


param (
    [string]$DeployToServer = "",
    [switch]$SkipBackup = $false,
    [switch]$automated = $false
)

$global:ErrorActionPreference = "Stop"

# error on uninitialized variables, non-existent properties, bad function calls
Set-StrictMode -version 2
At the top of the script some script preferences are set and the following parameters are defined:
  • DeployToServer - Name of the target server to deploy to. The idea is this would bypass a prompt where the user chooses the server to deploy to. This would generally be set by a CI process. At the moment this isn't being used in the script as our build server went down in flames before I could make use of this.
  • SkipBackup - Allows turning off default behavior of backing up the existing app installation on the target deployment server.
  • Automated - This was meant to control whether the script ever paused for any user-interaction or not. It is used in a couple of places but not fully implemented. It could be replaced by use of DeployToServer but I wanted to be explicit. Typically this would just be set from a CI build.
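Putting the parameters together, a CI-style invocation versus an interactive one would look something like this (the script filename and server name are hypothetical):

```powershell
# Automated CI-style deployment straight to a known server, no prompts
.\Deploy-MyApp.ps1 -DeployToServer "tst-server" -automated

# Interactive deployment with the backup of the existing install skipped
.\Deploy-MyApp.ps1 -SkipBackup
```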


After all the functions in the deployment script, the following script level variables are defined. I went with an $_ naming convention for script level variables; $script:variableName may have been better but more verbose.
$_scriptPath = (get-scriptdirectory)

# import generic deployment script functions
. (join-path $_scriptPath "Deploy-Common.ps1")

$_packagePath = $_scriptPath # for now anyway, just being explicit here
$_transcriptFile = (join-path $_scriptPath "DeployTranscript.txt")

$_targetServer = ""
$_targetClickOnceDir = ""
$_targetSatelliteDir = ""
$_targetServicesDir = ""
$_targetRoot = ""
$_envName = ""

$_activity = "Initializing"
$_zipFileObject = $null

$_successfulDeploy = $false

$_scriptErrorMessage = ""
$_clickOnceUrl = ""
$_deployTS = $null
At the very bottom is the call to the main Publish-App function with error handling, invoking the product web page if successful, and transcript logging and emailing.
try
{
    Init
    Publish-App
    if (!$script:automated)
    {
        $_clickOnceUrl = ("http://{0}" -f $_targetServer)
        "Launching app landing page at $_clickOnceUrl"
        Invoke-InternetExplorer $_clickOnceUrl
    }
    $_successfulDeploy = $true
}
catch [System.Exception]
{
    $_scriptErrorMessage = ("Deployment failed with error: {0}{1}{1}Script: {2}  Line,Col: {3},{4}" `
		-f $_.Exception.ToString(), [System.Environment]::NewLine,  $_.InvocationInfo.ScriptName, `
		$_.InvocationInfo.ScriptLineNumber, $_.InvocationInfo.OffsetInLine)
    Write-Warning $_scriptErrorMessage
}
finally
{
    Write-Progress "Done" "Done" -completed
    if (Get-CanTranscribe) { Stop-Transcript }
    Send-Notification
}

if (!$_successfulDeploy) { invoke-item $_transcriptFile }

# if not an automated deploy then pause
if (!$script:automated)
{
    "`nPress enter to continue..."
    read-host | out-null
}


The Init function kicks off transcription, reads in build version information from a file (See Initialization in Part 1), tacks on to the PATH environment variable, and sets up the PowerShell UI shell.
function Init
{
    # setting $ErrorActionPreference here doesn't appear to affect anything
    try { Stop-Transcript | out-null } catch { }
    if (Get-CanTranscribe) { Start-Transcript -path $_transcriptFile }

    $buildInfo = Get-BuildInfo
    $env:path += ";$env:windir\Microsoft.NET\Framework\v4.0.30319;$env:windir\System32"
    # customize window shell
    if ($Host -and $Host.UI -and $Host.UI.RawUI)
    {
        try
        {
            $ui = (Get-Host).UI.RawUI
            $ui.WindowTitle = ("Deploy MyApp {0} ({1})" `
				-f $buildInfo.AppVersion, $buildInfo.FileVersion)
            if (!(Get-IsHostISE))
            {
                $ui.BackgroundColor = "DarkBlue"
                $ui.ForegroundColor = "White"
                $bufferSize = $ui.BufferSize
                $bufferSize.Width = 120
                $bufferSize.Height = 9000
                $ui.BufferSize = $bufferSize
                $winSize = $ui.WindowSize
                $winSize.Width = 120
                $winSize.Height = 70
                $ui.WindowSize = $winSize
            }
        }
        catch [System.Exception]
        {
            ("Error configuring host UI: {0}" -f $_.Exception.Message)
        }
    }
}

Deployment Target Menu

This function is intended for user-interactive execution of the deployment; in our case this was mostly for production-level environments where another group performed the deployment. However it also came in handy in some other scenarios where we could not do completely automated continuous deployment, such as a period where our CI server was all hosed up. Additionally there are times where it is handy to push a custom build from anywhere to anywhere, outside of the normal process flow.

For automated CI deployments a target server name would be passed in and this user menu interaction would be skipped. This function displays a menu of valid target environments to deploy to, sets the target server accordingly, and invokes a deployment function with that target server name.

For the menu I originally tried just using $Host.ui.PromptForChoice as in this article by James O'Neill. However I did not like how that laid out; everything was rendered horizontally instead of vertically and some things ran together or were not spaced to my liking. That led me to this post by Jeff Hicks, which my menu is based on; the Show-Menu function used in the switch statement below is his function, which I did not modify.
function Publish-App
{
    $menu = @"
    1 Development
    2 Iteration
    3 Test
    4 Pre-release Training
    5 Production
    6 Post-release Training
    C Cancel
Your choice
"@
    $targetServer = ""
    $script:_envName = ""
    $buildInfo = Get-BuildInfo
    $title = ("Select Target Destination for MyApp {0} ({1})" `
		-f $buildInfo.AppVersion, $buildInfo.FileVersion)

    # Keep looping and running the menu until the user selects a valid item or cancels.
    do
    {
        switch (Show-Menu $menu $title -clear)
        {
           "1" { $targetServer = "dev-server"; $script:_envName = "Development"; }
           "2" { $targetServer = "itr-server"; $script:_envName = "Iteration"; }
           "3" { $targetServer = "tst-server"; $script:_envName = "Test"; }
           "4" { $targetServer = "pre-server"; $script:_envName = "Pre-release"; }
           "5" { $targetServer = "prod-server"; $script:_envName = "Production"; }
           "6" { $targetServer = "post-server"; $script:_envName = "Post-release"; }
           "C" { Write-Output "Cancel"; return; }
        }
    } while (!$script:_envName -and !$targetServer)

    if ($targetServer -and $script:_envName)
    {
        $choiceYes = New-Object System.Management.Automation.Host.ChoiceDescription "&Yes", "Answer Yes."
        $choiceNo = New-Object System.Management.Automation.Host.ChoiceDescription "&No", "Answer No."
        $options = [System.Management.Automation.Host.ChoiceDescription[]]($choiceYes, $choiceNo)
        $result = $host.ui.PromptForChoice("Confirm deployment target", `
			"Deploy My App to $_envName ($targetServer)?", $options, 0)
        if ($result -eq 0)
        {
            Publish-ToServer $targetServer -skipBackup:$script:skipBackup
        }
        else
        {
            Write-Output "Deployment cancelled"
        }
    }
}

Server Deployment Driver Function

This function sets some script level variables such as common folder locations, and calls functions to perform cleanup, backup any existing installation (see this post), and deploy ClickOnce, satellite and service files.
function Publish-ToServer (
    [string] $targetServer = $(throw "targetServer is required"),
    [switch] $skipBackup = $false )
{
    $startTime = [DateTime]::Now
    Write-Output "`n"
    Set-Activity "Deploying to $targetServer"
    Write-Log "Beginning deploy to target server $targetServer"
    $script:_targetServer = $targetServer
    $script:_targetRoot = "\\$targetServer\Share$"
    $script:_targetClickOnceDir = "\\$targetServer\Share$\MyApp\ClickOnce"
    $script:_targetSatelliteDir = "\\$targetServer\Share$\MyApp\Satellite"
    $script:_targetServicesDir = "\\$targetServer\Share$\MyApp\Services"

    Clear-OldFiles
    if (!$skipBackup) { Backup-ExistingInstall }
    Publish-ClickOnceFiles
    Publish-SatelliteFiles
    Publish-Service

    $script:_deployTS = [DateTime]::Now - $startTime
    Write-Log ("Published to $targetServer in {0:N0} minute(s) and {1} second(s)`n" `
		-f [math]::floor($_deployTS.TotalMinutes), $_deployTS.Seconds)
}

Cleanup Before Backup

Before backing up the existing target folders, the script does some cleanup to remove some files and folders that are not desirable for backup. Mostly this cleanup is around removing old ClickOnce folders; because of the version naming convention there will quickly be a large number of folders.

The below function will look for folders ordered by modified time descending and select the first one to determine the most recent ClickOnce folder (you could argue some holes with that logic). It will then keep only that one, getting rid of all the others. In this way it will only be backing up the last ClickOnce folder. If you are starting fresh this is not a problem per se but in my case there were many existing versions out there already.
function Clear-OldFiles
{
    Set-Activity "Cleaning up old ClickOnce versions"
    Clear-OldClickOnceAppFiles $_targetClickOnceDir
    # additional cleanup here removed...
}

function Clear-OldClickOnceAppFiles ($rootDir)
{
    if (!(Test-Path $rootDir))
    {
        Write-Log "$rootDir doesn't exist; nothing to do"
        return
    }

    # exclude on subdirectory names doesn't seem to work
    # so we'll just rename dir, grab most recent child, copy over
    $appFilesDir = (join-path $rootDir "\Application Files")
    if (!(Test-Path $appFilesDir))
    {
        Write-Log "Didn't find Application Files folder at $appFilesDir; nothing to do"
        return
    }

    Write-Log ("Removing old ClickOnce app files beyond one version back from {0}" `
		-f $appFilesDir)
    $folders = @(Get-ChildItem -Path $appFilesDir -recurse `
		| Where-Object {$_.PSIsContainer})
    if ($folders.Length -le 1)
    {
        Write-Log ("No old versions to remove (folder count was {0}); exiting" `
			-f ($folders.Length))
        return
    }
    Write-Log ("Found {0} ClickOnce version folder(s)" -f ($folders.Length))
    Write-Log "Renaming $appFilesDir to Application Files Temp"
    Rename-Item $appFilesDir "Application Files Temp"
    Write-Log "Determining most recent ClickOnce app files subfolder"
    $appFilesTempDir = (join-path $rootDir "\Application Files Temp")
    $mostRecentAppFilesDir = Get-ChildItem -Path $appFilesTempDir `
		| Where-Object {$_.PSIsContainer} `
		| Sort-Object LastWriteTime -Descending | Select-Object -First 1
    Write-Log "Most recent app files dir is $mostRecentAppFilesDir"
    New-Item $appFilesDir -type directory
    Write-Log ("Copying {0} to $appFilesDir" -f ($mostRecentAppFilesDir.FullName))
    copy-item -recurse $mostRecentAppFilesDir.FullName $appFilesDir
    $folderCount = ((Get-ChildItem -Path $appFilesTempDir `
		| Where-Object {$_.PSIsContainer})).Length
    Write-Log ("Removing {0} old version(s)" -f ($folderCount-1))
    Remove-Dir $appFilesTempDir
    Write-Log "Old ClickOnce app files removed"
}

Deploying Satellite Files

Because some of the client satellite files are loaded directly off the network by this app, some files will be locked if users are running the app. This function first calls a helper function that uses PowerShell remoting to disconnect file sessions to a remote server. I'm not a fan of loading assemblies directly off a network share; I think syncing the client files with the server and then loading on the client is better but it is what it is.

The function then deletes all the files in the target Satellite directory minus the config file. At the moment the deployment script is not updating configuration files though that was the plan in the beginning.

There is also a hackish sleep call between deleting files and copying as there were sporadic access denied errors of a locking / timing nature. Finally a call is made to a copy helper function that has support for additional attempts on error as well as outputting the results of what was copied.
function Publish-SatelliteFiles
{
    Set-Activity "Deploying Staging files"
    Disconnect-FileSessions $_targetServer
    # exclude deleting config - currently configured by hand until it can be automated
    Remove-RootFilesInDir $_targetSatelliteDir -exclude "MyApp.Client.exe.config"
    "Pausing between delete and copy"
    Start-Sleep -s 3
    Copy-Files -from "$_packagePath\Satellite\**" -dest $_targetSatelliteDir `
		-recurse -attempts 2
}

Deploying ClickOnce Files

The ClickOnce deployment is similar.
function Publish-ClickOnceFiles
{
    Set-Activity "Deploying ClickOnce files"
    Disconnect-FileSessions $_targetServer
    Remove-Dir (join-path $_targetClickOnceDir "Application Files")
    Remove-RootFilesInDir $_targetClickOnceDir
    "Pausing between delete and copy"
    Start-Sleep -s 3
    Copy-Files -from "$_packagePath\ClickOnce\**" -dest $_targetClickOnceDir `
		-recurse -attempts 2
    Copy-Files -from "$_packagePath\BuildInfo.csv" -dest $_targetClickOnceDir
}

Deploying Service Files

I posted Install a Windows Service Remotely with PowerShell a while back so refer to it for additional details such as the functions Uninstall-Service, Install-Service and Start-Service.
function Publish-Service
{
    Set-Activity "Deploying Service files"
    $serviceName = "MyAppDataService"
    Write-Log "Stopping, uninstalling service $serviceName on $_targetServer"
    Uninstall-Service $serviceName $_targetServer
    "Pausing to ensure files are not locked during delete..."
    Start-Sleep -s 5 # Yeah I know, don't beat me up over this
    Remove-RootFilesInDir $_targetServicesDir
    Copy-Files "$_packagePath\Services\**" $_targetServicesDir -recurse
    Install-Service `
        -ServiceName $serviceName `
        -TargetServer $_targetServer `
        -DisplayName "MyApp Data Service" `
        -PhysicalPath "D:\Apps\MyApp\Services\MyApp.DataService.exe" `
        -Username "NT AUTHORITY\NetworkService" `
        -Description "Provides remote TCP/IP communication between the MyApp client application and the database tier."
    New-RestartServiceCommand
    Start-Service $serviceName $_targetServer
}
The New-RestartServiceCommand function creates a batch file that restarts the Windows service. On each target server there is a scheduled task that invokes this batch file daily late at night. Originally that was done to help ensure any memory and network resources were properly released in the event of unexpected issues. The scheduled task is currently a one-time manual setup process though creating it could certainly be automated as well.
function New-RestartServiceCommand
{
    $file = (join-path $_targetServicesDir "MyAppServiceRestart.bat")
    "Creating $file for the nightly scheduled task to restart the service"
    if (Test-Path $file) { Remove-Item -Force $file }
    Add-Content $file "REM This is for automatically restarting the MyApp data service via a nightly scheduled task"
    Add-Content $file "net stop `"MyApp Data Service`""
    Add-Content $file "net start `"MyApp Data Service`""
}
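Creating the scheduled task itself could be scripted as well. A hedged sketch using schtasks.exe (the task name and run time are assumptions, not what we actually use):

```powershell
# Hypothetical: register a nightly task on the target server that runs the restart batch file
schtasks /create /s $_targetServer /tn "MyApp Service Restart" `
    /tr "D:\Apps\MyApp\Services\MyAppServiceRestart.bat" `
    /sc DAILY /st 02:00 /ru "SYSTEM" /f
```

The /f switch overwrites any existing task of the same name, so this would be safe to re-run on each deployment.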

Some Common Helper Functions

Some of the common helper functions used are below (functions detailed in other referenced posts are omitted).

File I/O

function Copy-Files([string]$from, [string]$dest, [switch]$recurse, [int]$attempts = 1)
{
    "Copying $from to $dest with recurse $recurse"
    $result = $null
    for ($i=1; $i -le $attempts; $i++)
    {
        try
        {
            $result = Copy-Item -Recurse:$recurse -Force -PassThru $from `
				-Destination $dest
            break
        }
        catch [System.Exception]
        {
            if ($i -lt $attempts)
            {
                ("Copy failed: '{0}'. Pausing. Max attempts: {1}, Attempts: {2}" `
					-f $_.Exception.Message, $attempts, $i)
                Start-Sleep -s 3
            }
            else { throw }
        }
    }
    if ($result) {foreach ($i in $result) {("Copied {0}" -f $i.FullName)}}
}

function Remove-Dir([string]$path)
{
    if (Test-Path $path)
    {
        Write-Output "Removing folder '$path'"
        Remove-Item -recurse -force $path
    }
}

function Remove-RootFilesInDir([string]$path, [string]$pattern = "*.*", `
    [string]$exclude = $null)
{
    $deleteWhat = (join-path $path $pattern)
    "Removing $deleteWhat"
    remove-item -Force $deleteWhat -Exclude $exclude
}

PowerShell Host Related

function Get-CanTranscribe
{
    # probably not the best way to answer this question but will at least rule out ISE
    return (!(Get-IsHostISE))
}

function Get-IsHostISE
{
    return ((Get-Host).Name -eq "Windows PowerShell ISE Host")
}

function get-scriptdirectory
{
    if (Test-Path variable:\hostinvocation)
    {
        $FullPath = (get-variable hostinvocation).value.Mycommand.Definition
    }
    else
    {
        $FullPath = (get-variable myinvocation -scope script).value.Mycommand.Definition
    }

    if (Test-Path $FullPath)
    {
        return (Split-Path $FullPath)
    }
    else
    {
        Write-Warning ("Get-ScriptDirectory: Powershell Host <" + (Get-Host).Name `
			+ "> may not be compatible with this function, the current directory <" `
			+ $FullPath + "> will be used.")
        return $FullPath
    }
}


# note that net session \\computername /delete won't work w/remote deployment
#     NET SESSION displays incoming connections only.
#     In other words it must be run on the machine that is acting as the server.
# Enabling PS Remoting:
# 1) On target server ensure that winrm service is running
#    In PowerShell: get-service winrm
# 2) Enable PS remoting on the target server
#    Enable-PSRemoting -Force
function Disconnect-FileSessions ([string]$server = $(throw "server is required"))
{
    "Disconnecting file sessions to $server"
    $s = New-PSSession -computername $server
    Invoke-Command -Session $s -scriptblock { (NET SESSION /delete /y) }
}

function Invoke-InternetExplorer([string]$url) {
    $ie = new-object -com internetexplorer.application
    $ie.Navigate($url)
    $ie.Visible = $true
}

function Send-Email($from, $to, $subject, $body, $smtpServer = "", `
	$attachment = $null, $isHtmlBody = $true) {
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)
    $msg = new-object Net.Mail.MailMessage
    $msg.From = $from
    $msg.To.Add($to)
    $msg.Subject = $subject
    $msg.IsBodyHtml = $isHtmlBody
    $msg.Body = $body
    if ($attachment) {
        $att = new-object Net.Mail.Attachment($attachment)
        $msg.Attachments.Add($att)
    }
    $smtp.Send($msg)
    if ($att) { $att.Dispose() | out-null }
}

Other Functions

The remaining functions below are less generic/common in nature but are also included in the main script.

Retrieving Build Information

The build info file discussed previously in this series is packaged in the same directory as the script and is read both for display in the PowerShell console and for the deployment notification email.
function Get-BuildInfo {
	$buildInfoFile = (join-path (Get-ScriptDirectory) "BuildInfo.csv")
	return Import-Csv $buildInfoFile
}
A sample data row from this file (the real file also includes a CSV header row naming the columns the scripts read, such as AppVersion, FileVersion, BuiltOn, ClickOnceRevision, and ClickOncePublishVersion):
"","2012.05.07.1008","5/7/2012 10:08:53 AM","117",""
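Because Import-Csv maps header names to object properties, a row like the sample can be explored interactively. The column order in the snippet below is an assumption for illustration only; the real BuildInfo.csv carries its own header row, which is what actually determines the property names.

```powershell
# Parse the sample row with an assumed column order (illustration only --
# the real BuildInfo.csv has its own header row, which wins).
$row = '"","2012.05.07.1008","5/7/2012 10:08:53 AM","117",""' |
    ConvertFrom-Csv -Header AppVersion, FileVersion, BuiltOn, ClickOnceRevision, ClickOncePublishVersion

$row.FileVersion        # -> 2012.05.07.1008
$row.ClickOnceRevision  # -> 117
```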

Logging, Diagnostics and Progress

function Set-Activity([string]$activity) {
    $script:_activity = $activity
    Write-Log "Current Activity: $_activity"
}

function Write-Log ([string]$message) {
    write-output $message
    write-progress -activity $_activity -status $message
}
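Together these are used roughly as follows; the function bodies are repeated so the snippet stands alone, and the activity and message strings are hypothetical:

```powershell
function Set-Activity([string]$activity) {
    $script:_activity = $activity
    Write-Log "Current Activity: $_activity"
}

function Write-Log ([string]$message) {
    write-output $message                                  # goes to console / transcript
    write-progress -activity $_activity -status $message   # updates the progress display
}

# Hypothetical deployment phases
Set-Activity "Backing up existing install"
Write-Log "Compressing current application folder"
Set-Activity "Deploying services"
Write-Log "Copying new service binaries"
```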

Backup Functions

Functions such as Backup-ExistingInstall, Backup-Dir, and Compress-Files are included in Compression Experiments In the Build and Deployment Process.

Deployment Email

function Send-Notification {
    $buildInfo = Get-BuildInfo
    $env = $script:_envName
    $appVer = $buildInfo.AppVersion
    $fileVer = $buildInfo.FileVersion
    $publishVer = $buildInfo.ClickOncePublishVersion
    $builtAt = $buildInfo.BuiltOn
    $deployText = "deployed"
    if (!$_successfulDeploy) {$deployText = "deployment failed"}
    $subject = "MyApp v {0} {1} to {2} ({3})" -f $buildInfo.AppVersion, `
		$deployText, $script:_envName, $_targetServer
    $deployedFrom = [Environment]::MachineName
    $deployedBy = [Environment]::UserName
    $deployedAt = [DateTime]::Now.ToString("G")
    $successOrFail = ""
    if ($_successfulDeploy) { $successOrFail = "Successful: True" }
    else {
        $successOrFail = "Successful: False`n`n" + "Error: " + $_scriptErrorMessage + "`n"
    }
    $deployTime = ""
    if ($_deployTS) {
        $deployTime = ("Deployment completed in {0:N0} minute(s) and {1} second(s)`n" `
			-f [math]::floor($_deployTS.TotalMinutes), $_deployTS.Seconds)
    }
    $br = "<br/>" # HTML line break since the message body is sent as HTML
    $body = @"
MyApp deployment results follow.$br$br
$successOrFail$br$br
Environment: $env ($_targetServer)$br
Run Webpage: $_clickOnceUrl$br$br
App Version: $appVer$br
Publish Version: $publishVer$br
File Version: $fileVer$br$br
Built At: $builtAt$br$br
Deployed from $deployedFrom by $deployedBy. Deployment details are attached.$br
$deployTime$br$br
This message was sent by an automated process.
"@
    $to = ""
    Send-Email -from "$" -to $to `
		-subject $subject -body $body -attachment $_transcriptFile
}

In Conclusion

That wraps up the initial version of this deployment script. Potential changes going forward are:
  • Adjustments to re-integrate this with CI (build server is currently down)
  • Updating app config files from the script
  • Automating creation of a scheduled task to restart the service
  • Script refactoring and cleanup
  • Kicking off database and/or report script deployments from app script
  • "Down for maintenance" page for users
  • Dependent apps - another web app uses some business and data components of this app and it should be updated when this app is deployed

Future posts may cover database and report deployment scripts and CI setup.

Build Automation Part 2: Building and Packaging

Series Index

Build Automation Part 1: Overview and Pre-build Tasks
Build Automation Part 2: Building and Packaging
Build Automation Part 3: App Deployment Script
Build Automation Part 4: Database and Report Deployments

In the last post in this series I discussed an overview of some build and deployment automation goals and covered some pre-build activities. In this post I cover compiling the code and packaging the files to be deployed.

Building the Code

In the Compile target the main solution file is built along with a few misc. projects that are not part of the main solution file. Refer back to part 1 for the definition of some of the items used such as $(CompileDependsOn), @(SolutionFile), etc.

One item that is not obvious here is @(ServiceAppBuild); basically there is another WCF service application that runs locally on the end user's machine in parallel with the client app. This application has its own build script and the client app being built passes in a ClientOutDir property that tells the service application build script where to copy the service app client files to after they are built. Really I wanted this piece to be handled by an internal NuGet package; unfortunately I had some issues setting up an internal NuGet server and I ran out of time.
<Target Name="Compile" DependsOnTargets="$(CompileDependsOn)">
    <Message Text="Starting compile of @(SolutionFile)"/>

    <ItemGroup>
      <SharedLibPath Include="..\MyApp.Shared\_Lib"/>
    </ItemGroup>

    <Message Text="Building service app projects; client output will be copied to %(SharedLibPath.FullPath)"/>

    <!-- First we need to copy the client files of this dependency app into our client app -->
    <!-- Really we need to change this so the dependency is nuget based but that'll have to be done later -->
    <MSBuild Projects="@(ServiceAppBuild)" Targets="Rebuild"
        Properties="ClientOutDir=%(SharedLibPath.FullPath)"/>

    <MSBuild Projects="@(SolutionFile)" Targets="Rebuild"/>
    <Message Text="Compile complete of @(SolutionFile)"/>

    <Message Text="Compiling misc other related such as Service Controller"/>
    <MSBuild Projects="@(MiscOtherToBuild)" Targets="Rebuild"/>

    <Message Text="All compilation is done"/>
</Target>

Breaking Up the Build Script

To keep the main MSBuild script from getting too lengthy I split it as follows:
  • Main driver script - always the entry point. Contains pre-build and build targets and wrapper targets to call into other build scripts.
  • Package script - takes all the compiled output of the main build script, copies the appropriate content to a deployment folder, and packages that up in a compressed archive.
  • Common script - tasks and properties that both the main build script and the package script need.
The main build script imports the others:
<Project xmlns="" ToolsVersion="4.0">

  <Import Project=""/>
  <Import Project=""/>
  <!-- ... -->

Setting Up the Packaging Build Script

Imports, Property and Item Groups

<Project xmlns="" ToolsVersion="4.0">
  <Import Project=""/>
  <Import Project="..\MyApp.Client\MyApp.Client.vbproj"/>

  <PropertyGroup>
    <RemovePackageFilesAfterZip Condition=" '$(RemovePackageFilesAfterZip)' == '' ">true</RemovePackageFilesAfterZip>
  </PropertyGroup>

  <ItemGroup>
    <DeployPackageFolderItem Include="$(DeployPackageFolder)"/>
    <ClientProjectItem Include="..\MyApp.Client\MyApp.Client.vbproj" />
  </ItemGroup>
  <!-- ... -->

Packaging Initialization

This target creates an error if the build number is not defined and then removes and recreates a deployment package folder where the deployable content will be placed.
<Target Name="Init">
    <Error Condition="'$(BUILD_NUMBER)' == ''" Text="Build number is required" />

    <MSBuild.ExtensionPack.FileSystem.Folder TaskAction="RemoveContent" 
        Path="@(DeployPackageFolderItem)"
        Condition="Exists(%(DeployPackageFolderItem.FullPath))" />
    <RemoveDir Directories="@(DeployPackageFolderItem)"/>
    <MakeDir Directories="@(DeployPackageFolderItem)"/>
</Target>

Publishing the ClickOnce Client Project

This target creates the ClickOnce manifests of the client project and is the equivalent of Build-->Publish in Visual Studio. Effectively the client project gets built twice, once as a part of the main solution, and again when packaged in Publish mode. Refer back to part 1 for the CreateLicenseFiles dependency.

I will speak to some of the details following the target definition:
<Target Name="BuildClickOncePublishFiles" DependsOnTargets="Init;CreateLicenseFiles">
    <Error Condition=" '$(TargetServer)' == '' "
    Text="'/p:TargetServer:server-name' is required to generate ClickOnce publish files"/>

    <GetClickOnceNextVersion TargetServer="$(TargetServer)" BuildNumber="$(BUILD_NUMBER)">
      <Output TaskParameter="ReturnValue" PropertyName="ClickOnceAppVersion"/>
    </GetClickOnceNextVersion>

    <GetFrameworkSdkPath>
      <Output TaskParameter="Path" PropertyName="SdkPath" />
    </GetFrameworkSdkPath>
    <Message Text="SdkPath: $(SdkPath)" />

    <!-- Other properties: Platform (i.e. x86), PublishUrl, InstallUrl -->
    <MSBuild Projects="@(ClientProjectItem)" Targets="Publish">
      <Output ItemName="OutputFiles" TaskParameter="TargetOutputs"/>
    </MSBuild>

    <CreateItem Include="$(DeployPackageFolder)ClickOnce\">
      <Output TaskParameter="Include" ItemName="ClickOnceDeployPath" />
    </CreateItem>

    <Error Condition="!Exists(%(ClickOnceDeployPath.FullPath))"
    Text="Expected ClickOnce folder $(ClickOnceDeployPath) to exist. 
	Either a partial target was run and not a full 
	build, build output was not at expected location, and/or build output copy failed." />
</Target>

Getting the Next ClickOnce Version Number

The target first raises an error if a TargetServer value was not passed to the build script; this value is used in the next step to determine the next ClickOnce revision number. The build info file created in part 1 gets deployed out to the target server along with the rest of the app. The task below reaches out to the target server, looks for the build info file, reads in the current ClickOnce revision number (or -1 if the file was not found), increments that value, writes the local file back out, and returns the full publish version for the new build.
<UsingTask TaskFactory="PowershellTaskFactory" TaskName="GetClickOnceNextVersion" AssemblyFile="$(PowerShellAssembly)">
    <ParameterGroup>
      <TargetServer Required="true" ParameterType="System.String" />
      <BuildNumber Required="true" ParameterType="System.String" />
      <ReturnValue Output="true"/>
    </ParameterGroup>
    <Task><![CDATA[
  $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Getting ClickOnce next publish version")
  $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "TargetServer is $targetServer")
  $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "BuildNumber is $buildNumber")
  $buildInfoFile = "\\$targetServer\SomeHiddenShare$\MyApp\BuildInfo.csv"
  $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Looking for $buildInfoFile")
  $appRev = -1
  if (Test-Path $buildInfoFile) {
      $obj = import-csv $buildInfoFile
      $appRev = [int32]$obj.ClickOnceRevision
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Found remote build info file; ClickOnceRevision is $appRev")
  }
  $nextRev = $appRev + 1
  #Major, Minor, Build, Revision
  $buildVer = new-object System.Version($buildNumber)
  $clickOnceVer = new-object System.Version($buildVer.Major, $buildVer.Minor, $buildVer.Build, $nextRev)
  # need to update local build info file with new publish version
  $localBuildInfoFile = "$(BuildArtifacts)BuildInfo.csv"
  $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Local build info file is $localBuildInfoFile")
  if (!(Test-Path $localBuildInfoFile)) {
    throw "Failed to find expected build info file $localBuildInfoFile"
  }
  $obj = import-csv $localBuildInfoFile
  $obj.ClickOnceRevision = $nextRev
  $obj.ClickOncePublishVersion = $clickOnceVer
  $obj | export-csv $localBuildInfoFile -notypeinformation
  $returnValue = $clickOnceVer.ToString()
  $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Next ClickOnce publish version is $returnValue")
    ]]></Task>
</UsingTask>

ClickOnce App Identity and Multiple Environments

The ClickOnce publish version is a curious, brittle little dependency. At one point I thought that there would be no problem with using the same publish version across different environments. After deploying the app to a test environment however, ClickOnce generated an error that the app was already installed; the fact that it was installed from another server (dev vs test) made no difference.

As a result, for the first deployment to each target environment, I seeded the ClickOnceRevision value with a different starting number so that collisions would be unlikely. Additionally, since the full ClickOnce publish version here is based on the Major, Minor, and Build numbers of the app version (which typically change with each push), a conflict would only arise when deploying the same app version to another server with the same ClickOnce revision number. Furthermore, only so many versions are kept in the ClickOnce cache, and generally only developers or business experts would be likely to run into this problem.

You can use Mage or MageUI to change the name of the deployment manifest to avoid this issue. Some suggest changing the app's assembly and product names for each environment, but to me that causes as many problems as it solves, and with our 8 environments it is not ideal. Still, if your ClickOnce app is made available from the Start menu (ours is online only) this could be more of a need.
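To make the collision math concrete, here is a standalone sketch (not part of the build script) of composing a publish version from the app version plus an environment-seeded revision. The environment names and base numbers are made up purely for illustration:

```powershell
# Hypothetical per-environment starting revisions (illustrative values only)
$revisionBase = @{ dev = 0; test = 5000; prod = 9000 }

function Get-PublishVersion([string]$buildNumber, [string]$envName, [int]$deployCount) {
    # Publish version = Major.Minor.Build of the app version + environment-seeded revision
    $buildVer = new-object System.Version($buildNumber)
    $rev = $revisionBase[$envName] + $deployCount
    (new-object System.Version($buildVer.Major, $buildVer.Minor, $buildVer.Build, $rev)).ToString()
}

# The same app version deployed once to dev and once to test no longer collides:
Get-PublishVersion "2012.05.07.1008" "dev" 1    # -> 2012.5.7.1
Get-PublishVersion "2012.05.07.1008" "test" 1   # -> 2012.5.7.5001
```

Seeding by environment is one simple way to keep the four-part versions distinct without renaming manifests or assemblies.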

Updating the ClickOnce Web Page

We previously modified the default ClickOnce web page that Visual Studio generates to change the .NET Framework bootstrapping and to disable the Run button briefly after users click it, so that double-clicking it (or clicking again because the app did not appear to launch quickly enough) would not accidentally launch multiple instances of the app.

This target collects various pieces of build information and calls another target to update placeholders in the webpage to reflect the current app version, file version, publish version and built on date.
<Target Name="CreateClickOnceWebPage">
    <ItemGroup>
      <ClickOnceDefaultFile Include="default.htm" />
    </ItemGroup>

    <Copy SourceFiles="@(ClickOnceDefaultFile)" 
		DestinationFolder="$(DeployPackageFolder)ClickOnce\" />

    <GetFileVersion>
      <Output TaskParameter="ReturnValue" PropertyName="FileVersion"/>
    </GetFileVersion>
    <GetClickOncePublishVersion>
      <Output TaskParameter="ReturnValue" PropertyName="ClickOnceVersion"/>
    </GetClickOncePublishVersion>
    <GetBuiltOnTime>
      <Output TaskParameter="ReturnValue" PropertyName="BuiltOnTime"/>
    </GetBuiltOnTime>

    <Message
    Text="Updating ClickOnce web page $(DeployPackageFolder)ClickOnce\default.htm with ClickOnce Version $(ClickOnceVersion), File Version $(FileVersion)"/>
</Target>

The tasks that retrieve the build properties could be consolidated but currently look like:
<UsingTask TaskFactory="PowershellTaskFactory" TaskName="GetFileVersion" AssemblyFile="$(PowerShellAssembly)">
    <ParameterGroup>
      <ReturnValue Output="true"/>
    </ParameterGroup>
    <Task><![CDATA[
            $obj = import-csv $(BuildArtifacts)BuildInfo.csv
            $returnValue = $obj.FileVersion
    ]]></Task>
  </UsingTask>

  <UsingTask TaskFactory="PowershellTaskFactory" TaskName="GetClickOncePublishVersion" AssemblyFile="$(PowerShellAssembly)">
    <ParameterGroup>
      <ReturnValue Output="true"/>
    </ParameterGroup>
    <Task><![CDATA[
            $obj = import-csv $(BuildArtifacts)BuildInfo.csv
            $returnValue = $obj.ClickOncePublishVersion
    ]]></Task>
  </UsingTask>

  <UsingTask TaskFactory="PowershellTaskFactory" TaskName="GetBuiltOnTime" AssemblyFile="$(PowerShellAssembly)">
    <ParameterGroup>
      <ReturnValue Output="true"/>
    </ParameterGroup>
    <Task><![CDATA[
            $obj = import-csv $(BuildArtifacts)BuildInfo.csv
            $returnValue = $obj.BuiltOn
    ]]></Task>
  </UsingTask>
The UpdateClickOncePage task takes those values in as parameters and replaces special placeholders with them.
<UsingTask TaskFactory="PowershellTaskFactory" TaskName="UpdateClickOncePage" AssemblyFile="$(PowerShellAssembly)">
    <ParameterGroup>
      <WebPageFilename Required="true" ParameterType="System.String" />
      <ClickOnceVersion Required="true" ParameterType="System.String" />
      <FileVersion Required="true" ParameterType="System.String" />
      <AppVersion Required="true" ParameterType="System.String" />
      <BuiltOn Required="true" ParameterType="System.String" />
    </ParameterGroup>
    <Task><![CDATA[
      Set-ItemProperty $WebPageFilename -name IsReadOnly -value $false
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "WebPageFilename is $WebPageFilename")
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "ClickOnceVersion is $ClickOnceVersion")
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "FileVersion is $FileVersion")
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "AppVersion is $AppVersion")
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "BuiltOn is $BuiltOn")
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Reading WebPage content")
      $page = Get-ChildItem $WebPageFilename
      $content = [string]::join([environment]::newline, (get-content $page))
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Replacing version tokens with version numbers")
      $content = $content.Replace("<!--+(ClickOnceVersion)-->", $ClickOnceVersion)
      $content = $content.Replace("<!--+(FileVersion)-->", $FileVersion)
      $content = $content.Replace("<!--+(AppVersion)-->", $AppVersion)
      $content = $content.Replace("<!--+(BuiltOn)-->", $BuiltOn)
      Set-Content $WebPageFilename ($content)
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Web Page Content modified")
    ]]></Task>
</UsingTask>

Staging the Files For Deployment

The ClickOnce files have already been staged into a $(DeployPackageFolder)ClickOnce\ folder and now the same is needed for Satellite files, Service files, and deployment scripts.


<Target Name="PackageDeployment">
	<CreateProperty Value="bin\x86\$(Configuration)\">
	  <Output TaskParameter="Value" PropertyName="BinOutput"/>
	</CreateProperty>
	<Message Text="Bin output path is $(BinOutput)"/>

	<!-- ... -->

Staging Service Files

<CreateItem Include="..\MyApp.Service\$(BinOutput)*.*">
  <Output TaskParameter="Include" ItemName="ServiceSourceFiles" />
</CreateItem>
<CreateItem Include="%(DeployPackageFolderItem.FullPath)Services\">
  <Output TaskParameter="Include" ItemName="ServiceDestFolder" />
</CreateItem>

<Message Text="Copying service source files to @(ServiceDestFolder)"/>
<Copy SourceFiles="@(ServiceSourceFiles)" DestinationFolder="@(ServiceDestFolder)"/>

<Error Condition="!Exists(%(ServiceDestFolder.FullPath))"
Text="Expected services folder %(ServiceDestFolder.FullPath) to exist. Either a partial target was run and not a full 
build, build output was not at expected location, and/or build output copy failed." />

Staging Satellite Files

This app has various satellite module assemblies that get loaded dynamically off a network share. Other assemblies and files such as config files may not get loaded directly off the network but may be copied to client machines from the network. The intent behind most of these files is allowing certain file updates without having to roll out a new application build.

This section includes and excludes specific files to copy just those files intended for satellite distribution.
<ItemGroup>
  <SatelliteSourceFiles Include="..\**\MyApp.*.dll; ..\**\MyApp.*.pdb; ..\**\ThirdParty.*.dll; 
	..\**\ThirdParty.*.pdb; ..\**\*.xslt; ..\**\*.css"
   Exclude="..\**\*Shared*; ..\**\*Oracle*; ..\**\*Database*; ..\**\*Business*; ..\**\obj\; 
	..\**\*MyApp.Client*; ..\**\*MyApp.Console*; ..\**\*MyApp.Service*" />
  <SatelliteSourceFiles Include="..\ThirdParty.Library\ThirdParty.dll"/>
  <SatelliteSourceFiles Include="..\MyApp.Client\MyApp.Help.chm"/>
</ItemGroup>
<CreateItem Include="%(DeployPackageFolderItem.FullPath)Satellite\">
  <Output TaskParameter="Include" ItemName="SatelliteDestFolder" />
</CreateItem>

<Message Text="Copying satellite source files to @(SatelliteDestFolder)"/>
<Copy SourceFiles="@(SatelliteSourceFiles)" DestinationFolder="@(SatelliteDestFolder)"/>

<Error Condition="!Exists(%(SatelliteDestFolder.FullPath))" 
Text="Expected satellite folder %(SatelliteDestFolder.FullPath) to exist. 
Either a partial target was run and not a full build, build output was not 
at expected location, and/or build output copy failed." />

<CreateItem Include="$(DeployPackageFolder)Satellite\ServiceApp\">
  <Output TaskParameter="Include" ItemName="ServiceAppDeployPath" />
</CreateItem>

<MakeDir Directories="@(ServiceAppDeployPath)"/>

<Message Text="Staging Service app files to %(ServiceAppDeployPath.FullPath)"/>
<StageServiceApp DestStagingDir="%(ServiceAppDeployPath.FullPath)" />

<Error Condition="!Exists(%(ServiceAppDeployPath.FullPath))"
Text="Expected satellite ServiceApp folder %(ServiceAppDeployPath.FullPath) to exist. 
Either a partial target was run and not a full build, build output was not at expected 
location, and/or build output copy failed." />
The prior target calls this StageServiceApp task, which invokes a PowerShell script of a dependent service app to copy the appropriate build output of that app to the Satellite folder of this ClickOnce app.
<UsingTask TaskFactory="PowershellTaskFactory" TaskName="StageServiceApp" AssemblyFile="$(PowerShellAssembly)">
    <ParameterGroup>
      <DestStagingDir Required="true" ParameterType="System.String" />
    </ParameterGroup>
    <Task><![CDATA[
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Staging ServiceApp to $DestStagingDir")
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Loading ServerStageServiceApp.ps1")
      . ..\..\..\..\Common\SomeServiceApp\Main\Code\SomeServiceApp\ServerStageServiceApp.ps1
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Performing staging of service app")
      ServerStage $DestStagingDir
      $log.LogMessage([Microsoft.Build.Framework.MessageImportance]"High", "Service app staging complete")
    ]]></Task>
</UsingTask>

Staging Deployment Scripts

Finally some PowerShell deployment scripts are copied into the deployment package folder along with the build info file and a compression assembly used to backup the existing installation.
<Message Text="Copying deployment scripts into deploy folder"/>
<ItemGroup>
  <BuildItemsToCopy Include="$(BuildFolder)Deploy*.*"/>
  <!-- might consider removing Ionic later since we are using 7zip; Ionic was faster at network zip ops though -->
  <BuildItemsToCopy Include="$(BuildFolder)Ionic.Zip.dll"/>
  <BuildItemsToCopy Include="$(BuildArtifacts)BuildInfo.csv"/>
</ItemGroup>
<Copy SourceFiles="@(BuildItemsToCopy)" DestinationFolder="@(DeployPackageFolderItem)"/>

<!-- we've copied into deploy folder so we can remove this one -->
<Delete Files="$(BuildArtifacts)BuildInfo.csv"/>

Compressing the Deployment Package Folder

At this point all the files that need to be deployed (or that perform the deployment) reside inside a root deployment package folder. This section of the PackageDeployment target first creates a normal zip file of that folder using 7-zip, then creates a self-extracting executable from that, and finally deletes the original zip file and deployment package folder since everything that is needed is in the SFX package.

I am not going to go into details here about how the compression is done as I covered that in this post and I like to follow the DRY principle for my blog too :).
<ItemGroup>
  <FilesToZip Include="$(DeployPackageFolderItem)**\*" 
  Exclude="..\**\*.build; ..\**\*.licx"/>
</ItemGroup>

<Message Text="Creating zip archive of contents of @(DeployPackageFolderItem)"/>

<!-- Switched to 7-zip cmdline since extension pack zip can't create self-extracting executable (sfx)
	 DotNetZip (Ionic) can create sfx but it was hanging zipping up some build artifacts for some reason
<MSBuild.ExtensionPack.Compression.Zip TaskAction="Create" CompressPath="@(DeployPackageFolderItem)"
  ZipFileName="$(BuildArtifacts)" RemoveRoot="@(DeployPackageFolderItem)"/>
-->
<Exec Command="$(BuildFolder)Zip-Install-Create.bat &quot;%(BuildArtifactsItem.FullPath)&quot;" />

<!-- file is in zip and can be removed now -->
<Delete Files="$(DeployPackageFolder)BuildInfo.csv"/>

<!-- now that contents are zipped, delete package folder -->
<MSBuild.ExtensionPack.FileSystem.Folder TaskAction="RemoveContent" 
Path="@(DeployPackageFolderItem)" Force="true" RetryCount="5" 
Condition="Exists(%(DeployPackageFolderItem.FullPath)) AND $(RemovePackageFilesAfterZip) == true" />

<RemoveDir Directories="@(DeployPackageFolderItem)" 
Condition="$(RemovePackageFilesAfterZip) == true" />

Calling the Build Script Now

Back in the main script, there is a convenience target defined to compile everything and then call the package deployment target.
<Target Name="BuildAndPackage" DependsOnTargets="Compile;PackageDeployment"/>
So now back in PowerShell a more complete call to the build script might look like one of the below:
msbuild /t:BuildAndPackage /p:BUILD_NUMBER= `
msbuild /t:BuildAndPackage /p:BUILD_NUMBER= `
	/p:TargetServer=app-server-test /p:Configuration=Release 
At this point the EXE can be run to extract the contents and launch the PowerShell script to deploy the app. Because custom parameters cannot really be passed through the EXE to the deployment script, additional build script changes are required to indicate whether this is an automated CI build and deployment or one being run in user interactive mode. Alternatively, such an indicator parameter could be used to create just the zip file for a CI build instead of the SFX, or to not delete the zip file and leave both. I'll leave such decisions as an exercise for the reader :).

What's Next?

I may post some details on the deployment scripts and/or Team City in the future. Happy building and deploying!