Build Automation Part 4: Database and Report Deployments

Series Index

Build Automation Part 1: Overview and Pre-build Tasks
Build Automation Part 2: Building and Packaging
Build Automation Part 3: App Deployment Script
Build Automation Part 4: Database and Report Deployments

Unlike deploying the application itself in its entirety each time, database and report items have required incremental deployments due to their nature and sheer size. The question becomes how to manage these increments in coordination with the application bits to ensure everything is in sync.

I do not pretend to have all the answers here. I was hoping this part of the process could get overhauled more with our last release but there is only so much time in the day.

Some Challenges

Size, Dependencies, Impact Analysis

On the (Oracle) database side of our operational app we have around 800 tables, 500+ packages, 500+ views, 300+ functions and procedures, and a variety of other object types across 3 schemas. Running schema compares/diffs and building sync scripts has been a time-consuming pain in the past, regardless of the various tools we have tried.

The situation is complicated by some 10 database environments, other app databases our app depends on, various apps depending on our database, and dynamic SQL embedded in report RDL files or generated by ORMs like Entity Framework. Dependency and impact analysis can be difficult, particularly across systems.

On the report side this app has over 600 SSRS reports scattered across 8 servers, several folders, and different versions of SQL Server Reporting Services.

Source Control

Source control for reports has not been much of a problem with SSRS, other than TFS generating a lot of unnecessary merge conflicts on the XML-based RDL files.

On the database side we have had some success using Team Coding in Toad with the TFS MSSCCI Provider. Quest came out with a native TFS provider but it did not support TFS work item association which ruled out our use of it.

The MSSCCI provider "works" with Toad for basic changes like packages, procedures, functions, triggers, views, etc., but boy, it is not without a lot of quirks. So much so that for a while I saved any database object script changes to files on my desktop out of fear my changes would get overwritten, which used to happen quite a bit. The other problem is that not all changes are source controlled, such as table schema changes, data changes, etc. We offset that via SQL file attachments on tasks tied to a project.

Pulling Report and Database Changes

I posted before about a custom TFS Artifact Manager (hereinafter "TAM") tool for pulling down report and database changeset files and task attachments, either for a single work item or a large batch of related work items, such as all tasks linked through a series of scenarios, backlog items and bugs tied to a project. I won't repeat those details here, but we currently still use the tool to gather up all the report and database changes for a given project release. It is far from perfect, but it beats manual guesswork and building sync scripts from database compares.

The TAM tool is also used to pull artifacts from various single-task ad hoc changes made outside of any official product release. Many reports can be added to the application through dynamic report forms with common parameters; new records are added to the database and the reports are deployed outside of any app deployment. Likewise there are occasional database changes made independent of the application.

There are other tools mentioned in the TAM post that we may try using more in the future. Also, Troy Hunt has a good series of deployment posts, including this one on automated database releases with TeamCity and Red Gate. Doing exactly that with our schemas would make me a bit nervous, but perhaps we will get there with some tweaks and experimentation.

Additionally, I posted a review of Red Gate's Schema Compare for Oracle, which can be quite a useful tool. We do not rely on it as heavily anymore given our custom tools and processes, but it is handy for double-checking things after deployments or for one-off database syncs.

Deploying Reports

Ad hoc Report Deployments

When deploying from Development to Test, our CI process picks up any reports merged from $/MyApp/Reports/Dev/ to $/MyApp/Reports/Main/ and automatically deploys those via PowerShell and RS Utility. Any corresponding database changes are manually applied before that. Environments beyond Test currently require attaching report and database changes to a Help Desk ticket and routing to a DBA. The DBA runs any database changes and deploys the reports using the script mentioned in the next section.

App Report Deployments

For reports to be deployed with a given build of an application, the process is basically the same as the ad hoc process with respect to going from Dev to Test. One difference is the timing of merging the report changes in source control to correspond with any dependent changes to the application code. When moving beyond the Test environment, all reports tied to a given project work item are pulled using the TAM tool. They are then deployed en masse using PowerShell and the SSRS web service (without RS Utility), in a manner similar to this post on Uploading SSRS Reports with PowerShell.
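Our exact deployment script is internal, but the mass upload can be sketched along these lines using the SSRS ReportService2010 endpoint. The server URL and target folder here are made up, and older SSRS versions expose a different endpoint (ReportService2005) with a CreateReport method instead, so treat this as an illustration rather than our implementation.

```powershell
# Illustrative values; substitute your own report server and folder.
$reportServerUri = "http://myserver/ReportServer/ReportService2010.asmx?wsdl"
$targetFolder = "/MyApp/Reports"
$proxy = New-WebServiceProxy -Uri $reportServerUri -UseDefaultCredential

foreach ($rdl in Get-ChildItem "*.rdl")
{
    $bytes = [System.IO.File]::ReadAllBytes($rdl.FullName)
    $warnings = $null
    # CreateCatalogItem uploads the RDL definition, overwriting any existing copy.
    $proxy.CreateCatalogItem("Report", $rdl.BaseName, $targetFolder,
        $true, $bytes, $null, [ref]$warnings) | Out-Null
    $warnings | ForEach-Object { Write-Warning $_.Message }
}
```

The warnings returned by CreateCatalogItem are worth surfacing; they commonly flag missing shared data sources that must be rebound after upload.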

Deploying Database Changes

We currently do not have much automation around ad hoc database deployments but not much is needed there.

For app database changes we start by pulling the database changes using the TAM tool. In that tool a script "package" is built by choosing the scripts to include and specifying any needed order of execution. Previously someone (a dev or DBA, depending on environment) would either run all those by hand in Toad, or build out an index/driver script and run that. It was not as bad as it might sound, given the tool produced combined SQL scripts for views, packages, procs, etc. Still, it was tedious if there were a number of data or schema migration scripts to be run in order.

Executing Database Scripts With PowerShell

Our resident Oracle expert Jim Tilson ("The Optimizer") had the idea of creating a PowerShell script that used SQL*Plus to generically execute all database scripts in a given directory. I paired with him to get the basic interaction going but this is his brainchild and work. He should probably be the one explaining this but no telling when that slacker will get around to blogging :). If you have any interest in Oracle, SQL optimization, database tech in general, or Ruby, you should reach out to him on Twitter and ask him to blog more (and tweet more while he is at it). At any rate this might be useful for others so I will post the code and attempt to explain it.


The script expects all the database script files to be located in subfolders of the folder where the script resides, one folder per schema name, with no subfolders within each schema folder (it is not recursive).

Each file in a schema folder will be executed regardless of filename extension. Ordering is based on filename; our TAM tool prefixes a numeric wart on each file to ensure an obvious order. At the moment the script does not explicitly specify a name ordering, but that is the default behavior.
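If we ever want to stop relying on that default, making the ordering explicit would be a one-line tweak (a sketch, not currently in the script):

```powershell
# Sort explicitly by name so the numeric filename prefixes are guaranteed
# to drive execution order, rather than trusting Get-ChildItem's default.
$files = @(Get-ChildItem | Where-Object { !$_.PsIsContainer } | Sort-Object Name)
```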

Executing Scripts for Each Schema

At the bottom of the script, the server TNS name is prompted for and a couple of things are set before running the main Run-Scripts function.
set-location (Get-ScriptDirectory)
$dateWart = Get-DateWart
$server = read-host "Enter the server TNS name"
Run-Scripts
echo "Successfully ran all scripts."
Run-Scripts invokes a function to run the scripts for each schema, passing along the user and server info. This could be made more generic by assuming any subfolder where the PowerShell script resides represents a schema with database scripts to run.
function Run-Scripts {
    foreach ($user in @("USER_SCHEMA_1", "USER_SCHEMA_2", "USER_SCHEMA_3")) {
        run-scriptsforschema -server $server -user $user
    }
}

Running Scripts for a Schema

This function will temporarily set location to the schema subfolder corresponding to the user/schema name passed in. It prompts for a password for a later connection to that schema. Finally it enumerates all files in the schema folder, calls a Run-Script function to execute each, and writes progress as it goes.

Each successfully executed file is moved into a Completed subfolder. That file move is important, as many migration scripts are written assuming they will only be run once, and we have been bitten by DBAs accidentally running scripts more than once.
function Run-ScriptsForSchema($server, $user) {
    echo "Running scripts for $user."
    set-location $user
    $password = get-password($user)
    ensure-directory ".\Completed"

    $files = @(get-childitem | where {!$_.PsIsContainer})
    $count = 0
    foreach ($fileInfo in $files) {
        write-progress -activity "Running scripts for $user" `
            -currentoperation $fileInfo.Name -status "Executing" `
            -PercentComplete (100 * $count / $files.count)
        Run-Script $user $password $fileInfo
        $count++
        write-progress -activity "Running scripts for $user" `
            -currentoperation $fileInfo.Name -status "Done" `
            -PercentComplete (100 * $count / $files.count)
        # Move the completed script so an accidental re-run cannot execute it twice.
        move-item -path $fileInfo.fullname -destination ".\Completed" -Force
    }
    write-progress -activity "Running scripts for $user" -status "Complete" -completed
    set-location ..
    echo "Completed scripts for $user"
}

Running a Script

The Run-Script function takes care of some logging and calls a normalize function to tack on additional SQL before and after the SQL contained in the file (error handling, commits, etc.); more on that in a moment. Function notes follow.
  • Path - the sqlplus location should be in the SYSTEM PATH environment variable so fully qualifying it should not be needed. In my case the location is C:\app\[username]\product\11.2.0\client_1\.
  • -L Parameter - instructs the app to only attempt to log on once; otherwise, with bad credentials, it can hang awaiting input.
  • -M Parameter - indicates HTML output is desired from sqlplus.
  • Credentials - The server name captured earlier is passed in along with the user/schema and password parameter values.
  • SQL - the normalize script function returns a SQL string and that is piped into sqlplus to be executed.
  • Output - the output is sent to $logFile and the 2>&1 sends standard error to standard output.
  • Error checking - Finally $LASTEXITCODE is checked to see what sqlplus.exe exited with; if 0 it was successful, otherwise it is the Oracle error number. The process stops on any error; manual changes might be needed to address any problems then the script can be run again.
function Run-Script($user, $password, $fileInfo) {
    $logDir = ("..\Logs\{0}\{1}" -f $dateWart, $user)
    ensure-directory $logDir
    $logFile = join-path $logDir ($fileInfo.basename + ".html")
    (normalize-script $fileInfo.fullname) | sqlplus.exe -L -M "HTML ON SPOOL ON" `
        -S "$user/""$password""@$server" >> $logFile 2>&1

    $lec = $LASTEXITCODE
    if ($lec -ne 0) {
        write-error ("ERROR executing {0}!" -f $fileInfo.FullName)
        # Stop here; after any manual fixes the script can be run again.
        exit $lec
    }
}

Adjusting the SQL

There are two critical adjustments made to the SQL read from the database script files. The first is detecting a SQL error and exiting SQL*Plus with the error code. The other is issuing a commit at the end; most of our data-related scripts do not include a commit, as often they are run and verified before a commit is issued. It is worth reading over the WHENEVER SQLERROR documentation, as some types of errors will not trigger an exit; fully checking for all types of errors might require something more brittle like scanning the log files for certain error phrases.
function normalize-script($filename) {
    @"
whenever sqlerror exit sql.sqlcode
set echo off
set termout off
$([string]::join("`n", (get-content $filename -readcount 0)))
commit;
"@
}

Helper Functions

At the top of the script are some helper functions, and an interop services DLL is used to translate the secure password into a plain text string to pass along to SQL*Plus.

function Get-ScriptDirectory {
    Split-Path ((Get-Variable MyInvocation -scope script).Value.MyCommand.Path)
}

function ensure-directory($dir) {
    if (!(test-path $dir)) { new-item $dir -type directory | out-null }
}

function Get-DateWart() {
    get-date -uformat "%Y %m %d %H %M %S"
}

function get-password($user) {
    $securePassword = read-host -AsSecureString "Password for $user@$server"
    # Convert the secure string to plain text for the sqlplus command line.
    [Runtime.InteropServices.Marshal]::PtrToStringAuto(
        [Runtime.InteropServices.Marshal]::SecureStringToBSTR($securePassword))
}

Analyzing the Results

The script stores logs under a Logs\[Timestamp]\Schema\ folder for troubleshooting and verification purposes.

SQL*Plus Alternatives

One alternative to SQL*Plus is using OracleCommand's ExecuteNonQuery method in Oracle.DataAccess.dll. I tried this approach back when I created an OracleScriptExecutor utility app designed to easily run SQL scripts against multiple schemas. It was a bit of a nightmare that I do not recommend. For one, you have to deal with annoyances like linefeed issues, semicolon and BEGIN/END block issues, and it is quite difficult to deal with multiple scripts combined in one SQL file (i.e. '/' delimited). It almost requires a full-blown SQL parsing engine to handle correctly, so I would rather delegate that pain to a tool like SQL*Plus that already handles such complexity.
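To illustrate the '/' delimiter problem, here is a naive splitter (a sketch, not our utility's code). It misfires as soon as a lone '/' appears inside a string literal, a comment, or PL/SQL source, which is exactly why something like SQL*Plus, with a real understanding of the grammar, is the better home for this logic:

```powershell
# Naive approach: treat any line containing only "/" as a statement terminator.
# This breaks on "/" inside string literals, comments, or PL/SQL bodies.
$statements = @()
$current = @()
foreach ($line in Get-Content "combined.sql")
{
    if ($line.Trim() -eq "/")
    {
        $statements += ($current -join "`n")
        $current = @()
    }
    else { $current += $line }
}
# Keep any trailing statement that lacks a final "/" delimiter.
if ($current.Count -gt 0) { $statements += ($current -join "`n") }
```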

In Conclusion

This really only scratches the surface of the problems and solutions in dealing with database and report deployments. With some time and TLC I am sure this beast could be further tamed. Thoughts, suggestions, tips, helpful tools, processes? Leave a comment!

I am running out of time in this series but hopefully I can touch on some CI details with TeamCity next.

Red Gate Schema Compare for Oracle Review

As part of my recent role in application release management, I have had to compare and sync various objects across Oracle databases. This was not fun work and I decided to evaluate some different tools to make this easier. The tool I have been working with recently is Red Gate's Schema Compare for Oracle. In this post I provide a brief walkthrough of its basic features and offer up my thoughts on the tool as well as how I feel it compares with other alternatives. The version of the tool shown here is

The Need

The major need I have for a database comparison and sync tool is for parallel deployments within the same environment; for example, syncing the schemas of one of our 3 development databases with another. These syncs happen at different points including when business validation starts near the end of an iteration, when parallel development is needed for application production hotfixes, or during different development or testing efforts. The other major need is for ensuring databases are largely in sync when they are expected to be, when preparing for or reviewing a deployment or troubleshooting differences in application behavior across environments.

We largely forgo using such tools for a couple of scenarios. First, our DBAs handle larger syncs, such as refreshing a development database from production. Likewise, we do not use such tools for change management to deploy changes from development on up the chain to production; I discuss that somewhat in my post on the TFS Artifact Manager.

Starting a New Compare

The process starts with creation of a new project and selection of the databases and schemas to be compared.

Next there are various options that should be reviewed before starting; each has a full description to the right when hovering with the mouse. There are a couple I want to call attention to after the screenshot.

There are some options Red Gate was kind enough to add for me based on my early feedback:
  • Include SET DEFINE OFF - Some objects in my original comparisons had ampersands (&) and as a result I was getting prompted for substitution variables when running the generated script. This disables that; it is not turned on by default but I would recommend checking it.
  • Ignore slow dependencies - This has to do with type objects and REF constraints as they relate to cross-schema dependencies. It greatly increased performance early on when dealing with very large schemas. Other performance improvements have since made it less important.
  • Ignore dependent objects in other schemas - This option also was helpful in performance but I used it more to help prevent pulling in dependent objects in other schemas upfront. With the way our security is setup, we often cannot perform DDL operations logged into the database as another user. We generally compare and deploy one schema at a time, logged in as the schema owner. Using this option prevents me from manually having to exclude various objects in other schemas later on when using the Deployment Wizard.

Options - What's Missing?

When contrasted with a schema comparison tool such as the one available in Toad for Oracle, the options here seem very limited. However, I personally find the 100+ checkboxes across 4 tabs in Toad rather dizzying, and honestly I do not really know what at least a third of them truly do (some do have status bar hints on mouse-over). I have also found that with certain checkboxes set, Toad can throw various Oracle errors such as No Data Found, and that can stop the whole process.

On one hand it seems logical to have options for the object types to compare. On the other hand, I do not think having that would make a big difference in performance and Schema Compare allows you to easily exclude and filter objects based on type afterwards. There is something to be said for simplicity and that is Red Gate's mantra.

One option from Toad I do really miss is "Format before comparison". In a perfect world everyone would use the same SQL formatting styles but in reality it is not easy to enforce. Many of the objects that were different only differed by formatting. The tool has options to ignore case and white space which helps but formatting differences were also around multi-line constructs in terms of wrapping, nesting, aligning and the like. I realize that full formatting functionality is far from trivial but this was a key option in my use.

Running the Database Comparison

After setting up the connection and options, clicking the Compare Now button will launch a progress dialog that allows cancelling the operation.

This is where I first ran into an issue with the initial trial version I used. Against one of our large schemas (some 3400+ objects), the tool was taking hours to run. I contacted Red Gate support to detail the performance issue as well as some other suggestions I had. They were immediately responsive and had me turn on verbose logging to troubleshoot the issue. Within the same business week they had a new non-public build of the application for me to try which had the new options mentioned above on performance, cross-schema impacts, SET DEFINE OFF, etc. The level of support they provided me is pretty much unheard of in my experience.

The performance options helped, but running the tool was still very slow compared to minutes with Toad. I was told one reason for this was that Schema Compare builds a full dependency tree to script things in a correct order, whereas I believe Toad and some other tools make no attempt to do this. In some cases order is not as critical, since PL/SQL objects and views can easily be recompiled afterwards. In other cases it is more important; either way it makes the process smoother.

In looking at the log file and database activity, we could see the SQL the tool was generating. At that point our resident SQL optimizer Jim Tilson was able to refactor the SQL to make very significant performance improvements, in one case from around 10 minutes to under a second. We sent the SQL modifications back to Red Gate and again they quickly implemented these changes and provided us new application builds that dramatically reduced the time to reasonable numbers.

In the end it turns out most of our performance issues were largely because we are still using an ancient 9i database due to a legacy app or two. I believe stats are not automatically gathered for the SYS schema on 9i, and gathering them pretty much requires logging in as SYS, which is far from ideal. Performance should be fine after gathering SYS stats on 9i, or when using an Oracle database from the past decade, which handles this automatically.

Currently the tool is running at under 2 minutes for a schema of ~800 objects and around 28 minutes for one of 3400+ objects. However, this is atypical in that we are still on 9i, have not generated statistics on the SYS schema, have a fairly large data model, and were running database restores at the time. For most users the execution time should be minimal.

Reviewing the Differences

When finished, the differences are broken down by difference status, and there are various options for grouping, filtering, sorting, and searching. Selecting an object presents a nice SQL difference viewer to navigate each difference.

If Schema Compare were closed at this point, the options would be saved but the comparison results would not be. A snapshot feature is provided that will do just that, but it would be convenient if the app automatically created that snapshot, or at least prompted to do so.

Each differing object can be checked or unchecked individually or in batch to determine the contents of the sync script to be generated. Once the desired items are included, the Deployment Wizard can be invoked to build the sync script.

Building the Sync Script

The deployment wizard walks through the major steps:
  1. Deployment method - The tool can execute the sync script itself, or can just display it, save it, or send it to another program to be run. Personally I recommend copying the generated script to the clipboard and running it in the DBMS du jour, be that SQL*Plus, Toad or what have you. If the script is executed in Schema Compare, any error can halt execution and it can be difficult to know where the error occurred (exact object / SQL).
  2. Review dependencies - Allows including objects that reference or are referenced by the items chosen to be deployed.
  3. Configure script - Optionally shown if there are deployment issues to be addressed; allows choosing different actions to control the behavior.
  4. Review script - Shows the deployment script and a summary of changes, as well as any warnings that may need to be considered.

Running the Sync Script

The sync SQL that Schema Compare generated ran without issue overall. Early on I did receive a permissions-related error, but that was because I did not realize I had included an object in a schema other than the one I was logged into when running the SQL. The options Red Gate added on my behalf help avoid this going forward, but others may not have this issue if their Oracle security setup is more cross-schema friendly. The other problem was being prompted for substitution variables early on, but again, the SET DEFINE OFF option that was added solved this issue.

A Nice Bonus Feature

When writing this post at one point I noticed a game of Asteroids running. I did not know where it came from, whether it was an easter egg somewhere, a virus, or an accidental launch of a game I did not know I had. After a minute of research I realized that it was an easter egg game that pops up from Schema Compare's About screen if it is left open for at least 30 seconds or so. Perhaps I should not give that surprise away but it is a fun game and a cool little nugget. Now if only this could be launched during a database compare to pass the time :).


The Competition

Competition makes for better products, and I have tried a handful of Oracle schema tools so far. I am sure there are other good tools in this space, and further use of and updates to the tools I have tried may change my outlook. What follows are my thoughts on similar tools and how Schema Compare stacks up in my opinion.


Toad for Oracle

Toad is what I started out with since it is what I already had. It may be the most common way of doing this, executes quickly, and has the most functionality and flexibility. Toad allows comparison of far more object types than Schema Compare (with many more options), so Red Gate's tool will not detect differences in less common items such as queues, jobs, DB links and the like. Toad's extra features do come at the cost of "complexity", and you will rarely hear me say the words simple, intuitive, friendly, modern or "standard Windows experience" when talking about Toad. The exact opposite is true of Schema Compare and other Red Gate products.

The main roadblock with Toad was that the change script it generated was display-only, so I could not do anything with it. I tried some clever ways to hack around that limitation, but it was not worth it or fair to Quest :). The Toad DB Admin Module is required to copy / save / use the sync script. The price I heard for that module is around $800 for one user, but I believe it is also tied to the version of Toad being used; our 9.X DB Admin Module key did not work in 10.X. The price is not prohibitively expensive for all the module provides, but when you only want one feature, it is harder to justify with no a la carte option.

Corporate spending requests can be difficult to get approved and with Quest maintenance support contracts, keeping the functionality active across Toad versions could be a roadblock. Further the Quest websites make no real mention of how you actually get this module if you want it. You pretty much have to figure out who to call and hope you do not get stuck with a high-pressure, low-tech salesman. Schema Compare is currently priced at $395 for a single user; I was fortunate enough to get a free license due to the level of feedback provided on the product.

The other Toad issue previously mentioned is that certain combinations of options generated Oracle errors in my usage in the past. To be fair, some of those issues may have since been fixed and some may only appear depending on the schemas it is being used against.

Schema Compare did appear to have a fuller dependency tree than Toad in terms of script ordering and deploying related objects. Toad claims it does the same, even if more of it is under the hood; in practice it was difficult to compare the results between products here, so take this with a grain of salt. Schema Compare's Deployment Warnings, Conflicts, and general experience did seem friendlier than Toad's "better know what you are doing" feeling.

At this point I would be remiss not to mention Quest's Jeff Smith (blog | Twitter) who is an excellent technical Toad and Oracle resource and can be a helpful buffer with the Sales team. Our relationship is based entirely on my complaining about Toad over Twitter and he has been most gracious in helping and putting up with me.


TOYS

Tool for Organizing Your Schemas (TOYS) is a lightweight, functional Oracle schema compare and sync tool. The main advantage for many is that it is free. On initial use I ran into some sort of configuration issue. The tool claims to be intuitive, but I did not find that to be the case, at least not compared to Schema Compare. TOYS is no-frills but certainly provides value. One deal breaker for my use was that TOYS does not provide any way of selecting which objects get synchronized, and workarounds such as editing the SQL script were not acceptable to me. I also did not like some of the generated SQL I found, such as unnecessary object DROPs when CREATE OR REPLACE would suffice. Other limitations in the tool's documentation gave me pause as well. Some level of support is offered with the app, but I would not expect a high level of support with a free tool.

Devart dbForge Schema Compare

DevArt's dbForge Schema Compare sells for $150. I did not get a chance to really use it, as it did not work on a 64-bit OS, at least not with TNS connections. That immediately turned me off the product, combined with a bad taste in my mouth from prior use of some Devart products for Entity Framework with Oracle. When the x64 issue is fixed I may try the tool again.

Completing the Picture

I also evaluated Red Gate's Data Compare during this process. A review here is out of scope, but I found it worked nicely and complemented Schema Compare well. Unlike Toad it does not require setting up annoying database links, and I found it much friendlier for comparing data. If you already own Toad, however, you get the functionality for free, whereas Data Compare is another $395. If you buy both Red Gate products I believe there is a discount, but at that point the pricing is equivalent to Toad's DB Admin Module. Toad's export-as-merge-statements feature can also be used as a quick and dirty form of data comparison and sync.


Conclusion

Overall I am quite pleased with Red Gate's Schema Compare. The tool occupies a good middle ground for me; it is not the cheapest nor the most expensive, not the most powerful but certainly not the least. Schema Compare gets the job done in the simplest and most intuitive fashion. I cannot express how great Red Gate's team has been to work with; Tom Harris and the Red Gate team have provided an excellent level of customer support. The app could still use some more performance improvements, at least for older databases. If some additional features can be added without hurting performance or the application's ease of use, this tool will be even more attractive.


Updates

  • 10/17/2011 - Updated information in the Schema Compare vs Toad comparison with regard to the dependency tree and script ordering.

Slow OracleConnection Creation

This afternoon I executed a simple DDL statement with Oracle.DataAccess.dll and noticed it took upwards of 30 seconds when it should have come back in a second or so. In stepping through the code I noticed the large delay was in creating the OracleConnection, not in connecting to the database or executing the SQL:

Oh Oracle you are the snake to my mongoose, or the mongoose to my snake. Either way it's bad.

I vaguely remembered a similar issue years ago that had something to do with a logging directory not existing. I no longer had the email I sent with the details, nor could I find the information online. So I fired up Process Monitor and found several writes to the below files:

In inspecting the log files I found hundreds of occurrences of the below error:
Directory does not exist for read/write 
[C:\app\username\product\11.2.0\client_2\log] []
This was further confirmed in Process Monitor entries:
1:52:13.8086867 PM	OracleScriptExecutor.vshost.exe	8556	CreateFile	
C:\app\username\product\11.2.0\client_2\log	NAME NOT FOUND	
Desired Access: Read Attributes, Disposition: Open, Options: Open Reparse Point, 
Attributes: n/a, ShareMode: Read, Write, Delete, AllocationSize: n/a
At first I just tried creating the log directory, but that did not work. I checked security on the directory but it seemed sufficient. To resolve the error I had to create the folder structure "log\diag\clients" under my "C:\app\username\product\11.2.0\client_2\" directory.
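For anyone scripting the workaround, the nested folders can be created in one shot. The client path here is illustrative; substitute your own Oracle client home:

```powershell
# Hypothetical Oracle client home; substitute your own install path.
$clientHome = "C:\app\username\product\11.2.0\client_2"
# -Force creates the full log\diag\clients chain and succeeds if it already exists.
New-Item -ItemType Directory -Force -Path (Join-Path $clientHome "log\diag\clients") | Out-Null
```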

Afterwards I could get through the OracleConnection ctor in a reasonable time:

With the details in hand, I changed my web search and discovered this Oracle forum entry posted 3 years ago! 3 years and this problem is still not fixed? Come on, Oracle, get it together.