Archive for June, 2012

“Mr. La Forge, Engage!”

Friday, June 29th, 2012

Several recent events have brought associate engagement to the forefront. Since the quite frank and refreshing conversation our CIO had at the associate breakfast a little while back, I’ve looked inward to find where I was engaged, where I wasn’t, and why. The classic definition of engagement doesn’t fit this conversation; however, I did find this:

Employee Engagement [Business Management Concept]: An “engaged employee” is one who is fully involved in, and enthusiastic about, their work, and thus will act in a way that furthers their organization’s interests.

At work I always act in a way that furthers my company’s interests, mainly because being an employee ties my future to the company’s. When the company benefits, I benefit. But I can say that there have been instances where I felt… less than enthusiastic. I worked hard nonetheless, but it felt like a weight on my shoulders. Did the work get done? …Yes. Was it a quality job? …Of course. Was it on time? …Always. But, all the same, by the time I got home I was beat, mentally and physically. My wife would force me to “leave work out on the tree,” which of course made it that much heavier.

Here are instances I’ve seen where a little recognition could provide quite a boost to associate engagement with very little cost:

  • Being backed up – When your supervisor has your back and rights a wrong or injustice…That’s engaging!
  • Recognition from your boss’s boss – A personal email from your director goes a long way: It lets you know that your work makes a difference and was noticed.
  • Role expansion – Being allowed to expand one’s role is very engaging.
  • Idea investigation – If you have an associate who has an idea that could potentially benefit the company, create an environment where the associate can explore it. Make sure that when it works and is beneficial, that associate is out front getting the credit…engaging!
  • Career coaching – Work with the associate to discover where their career is going and what it could become. Associates who can see their future are engaged.
  • Comp time – If someone has worked extra hours and/or through the weekend, giving them a day or two off is a great way to let them recharge…and to gain enhanced engagement.

There are many others…and the best part is that all you have to do to find some is ask. I guess there is nothing more inspiring, at least in my mind, than getting a little recognition for your contributions and being involved in shaping your environment.

An observation if I may: Sometimes it only takes one act to disengage a highly engaged associate…Tread carefully.

So…When was the last time you were recognized for your contribution or allowed to make a difference? When was the last time you acknowledged someone for their contributions?

First step of Automation: Process Inventory

Friday, June 29th, 2012

One of the first steps in initiating a maturity model program is to take an inventory of what processes exist. There are generally more processes in any unit than initially thought, and it takes a bit of time to sit down and hash out what they are.

  1. proc·ess/ˈpräˌses/

    Verb: Perform a series of mechanical or chemical operations on (something) in order to change or preserve it: “the stages in processing the wool”.

    Noun: A series of actions or steps taken to achieve an end. 

With this definition in mind, how many processes (the noun form above) does your unit have? Probably more than you think. Anything can be automated as long as you understand the process steps and have the necessary resources to accomplish the automation. Let’s take a look at a simple example: Spending Accounts (SPAA) database management. Keep in mind this is just an example and not an all-encompassing, cohesive list. Here are some of the processes that existed for managing the database in my old position:

Area              Task
----------------  --------------------------
Disk Management   Database Growth Estimation
Disk Management   Performance Monitoring
Build Management  Data Refresh
Build Management  Database Builds
Build Management  Script Builds
Turn Process      Build Validation
Turn Process      TCW Preparation
Code Management   Code Reviews
Code Management   Source Control Audits
Code Management   Code Branching

This is only a subset of the processes that are required, but I’d like to keep it simple for this exercise. Next, let’s categorize the processes by current effort (i.e., how many resources a process takes to accomplish; this could be considered the resource savings, or the REWARD), dependency type (internal, external, or mixed; i.e., will I need outside resources to finish the task?), and automation effort (i.e., how many resources it would take to automate; this could be considered the cost, or the RISK). This is a much tougher task with a lot of variability. It is best done with a small group of knowledgeable associates; any group larger than four is going to lose focus, and the discussion will break down pretty fast.

So let’s say this is the result (keep in mind this is hypothetical):

Area              Task                        Current Effort (Hours/Month)  Dependency  Automation Effort (Effort Months)
----------------  --------------------------  ----------------------------  ----------  ---------------------------------
Disk Management   Database Growth Estimation  1                             Internal    2
Disk Management   Performance Monitoring      10                            External    6
Build Management  Data Refresh                60                            Mixed       3
Build Management  Database Builds             40                            Internal    2
Build Management  Script Builds               30                            Internal    2
Turn Process      Build Validation            4                             Internal    2
Turn Process      TCW Preparation             4                             Mixed       1
Turn Process      TCW Turn                    4                             External    2
Code Management   Code Reviews                20                            Internal    10
Code Management   Source Control Audits       4                             Internal    1
Code Management   Code Branching              4                             Internal    1

So with these numbers we can start the evaluation/prioritization step. Remember, we are looking to get the biggest bang for the buck. There are many measures you could use, but the one I like is the Breakeven Point: At what point does the automation effort pay off? I understand that this is highly subjective and imprecise, but you have to start somewhere. Here are the results:

Area              Task                        Breakeven Point (Months)  Priority
----------------  --------------------------  ------------------------  --------
Disk Management   Database Growth Estimation  320                       7
Disk Management   Performance Monitoring      96                        6
Build Management  Data Refresh                8                         2
Build Management  Database Builds             8                         1
Build Management  Script Builds               11                        3
Turn Process      Build Validation            80                        5
Turn Process      TCW Preparation             40                        4
Turn Process      TCW Turn                    80                        5
Code Management   Code Reviews                80                        5
Code Management   Source Control Audits       40                        4
Code Management   Code Branching              40                        4
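The breakeven figures above can be reproduced with a small sketch (Python here, purely for illustration). The conversion of 160 working hours per effort-month is my assumption; it happens to match the hypothetical numbers.

```python
# Breakeven: hours invested in automation divided by hours saved per
# month. The 160 hours-per-effort-month conversion is an assumption.
HOURS_PER_EFFORT_MONTH = 160

def breakeven_months(current_hours_per_month, automation_effort_months):
    """Months until the hours invested in automation are paid back."""
    invested = automation_effort_months * HOURS_PER_EFFORT_MONTH
    return round(invested / current_hours_per_month)

print(breakeven_months(60, 3))   # Data Refresh -> 8
print(breakeven_months(40, 2))   # Database Builds -> 8
print(breakeven_months(20, 10))  # Code Reviews -> 80
```

The same function, fed the effort table row by row, generates the whole breakeven column.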

So I can see that Data Refresh and Database Builds are taking most of my time. They also happen to be the fastest payoff for an automation investment, so I’ll target them first for automation. I also notice that while these tasks will require the same effort to automate, one has mixed (external) process dependencies, so I might want to lower that task’s priority until I can arrange time with the necessary outside resources.

Process improvement is a never-ending endeavor: On a regular basis (e.g., once a quarter) this exercise should be repeated to see what has changed and how it impacts existing maturity efforts. Business processes change, new technology becomes available, and associates come up with new ideas for automation…all of these will impact what is possible and how it can be employed to improve existing processes.

Next we’ll discuss breaking down a process targeted for automation.


Maturity Models

Thursday, June 28th, 2012

I really appreciate maturity models in general. They provide a framework from which organizations can reflect and improve, and in doing so provide a healthier environment to work in while becoming more efficient and profitable. Probably the most well-known is the Capability Maturity Model, which evolved directly out of the software development process. There are some concerns with models like CMM in certain circles. But if you disregard the ideological hype and the consultant-generating/copyrighted/patented materials, you can create a very cohesive structure that effectively meets the necessary business objectives at a lower cost and in a shorter timeframe.

Maturity models outline a framework that provides:

  • A starting point for process improvement.
  • A shared lexicon from which information can be exchanged.
  • A way to define and document processes.
  • A structure for prioritizing actions.
  • A way to measure improvement.

I’m not married to any particular maturity model or the belief system that goes with it. Rather, I prefer to follow a common-sense approach that fits the organization’s idiosyncrasies. Additionally, following an unofficial model allows an individual to institute changes from the bottom up rather than the top down.

When it comes right down to it, process improvement is having a process that can be documented, executed, measured, and then evaluated for improvement. The best way to accomplish this is to automate all the repeatable steps in the process.

When I look toward process improvement, I take the following path:

  1. I take an inventory of all known processes and assess the complexity of possible automation (the Risk) and the resources I could save (the Reward).
  2. I prioritize the inventory list based on the Risk/Reward factor and available/desired tools and resources.
  3. Using the prioritized list as a backlog of projects:
    1. Document and verify the process steps.
    2. Investigate and evaluate possible forms of automation, identifying which parts can and cannot be automated.
    3. Generate the appropriate design documentation (blueprint) required to automate the process.
    4. Develop the automation and supporting process measurements.
    5. Test and validate the automation.
    6. Integrate the automation into the existing process, updating the process documentation.
  4. Once the process(es) are automated, reports can be generated to evaluate the process, find trouble areas and provide feedback back into the backlog.
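Steps 1 and 2 above can be sketched in a few lines; I’m using a simple reward/risk ratio as the prioritization score, and the field names and sample numbers are my own illustration.

```python
# Toy backlog prioritization: score each process by reward
# (hours saved per month) over risk (effort months to automate).
# Names, numbers, and the simple ratio are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Process:
    name: str
    reward_hours_per_month: float  # resources saved once automated
    risk_effort_months: float      # estimated automation effort

    @property
    def score(self) -> float:
        return self.reward_hours_per_month / self.risk_effort_months

inventory = [
    Process("Code Reviews", 20, 10),
    Process("Data Refresh", 60, 3),
    Process("Script Builds", 30, 2),
]

# Highest score first becomes the top of the backlog.
backlog = sorted(inventory, key=lambda p: p.score, reverse=True)
print([p.name for p in backlog])
# ['Data Refresh', 'Script Builds', 'Code Reviews']
```

In practice the score would also be adjusted for the available/desired tools and resources, but the shape of the exercise is the same.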

We’ll look at more details as we progress…The next entry will discuss the Process Inventory.

“I stand relieved.”

Thursday, June 28th, 2012

In a previous life, I spent countless hours watching gauges, taking logs, and answering the bells on an engine order telegraph a couple hundred feet underwater: Life on a submarine can be quite boring. Six hours of every 18 were spent standing watch on the throttles (big chrome wheels on the left). It was boring…very boring…mind-numbing, watching-paint-dry boring. And it was one of the most important jobs on the boat, because if you did not pay attention, if you did not follow the proper procedures and documentation, if you did not give a proper turnover to your relief, then a catastrophic accident could occur. So it was vitally important to give a proper turnover.

I can hear it already: “What’s a turnover?” A turnover in the Navy, what we would call a transition in Corporate America, is a process whereby you provide information to your relief (replacement) for the watch station (position) that you currently hold. It sounds mysterious, but it is conceptually quite simple:

  • You outline the current state of the system(s) that you are transitioning.
  • You detail any anomalies that have occurred since you took over (or over a sufficient period such that any relevant events can be considered).
  • You review any existing documentation.
  • You review the most important standard operating procedures.

Once you have covered this information, you ask, “Is there anything else you wish to cover?” If there are additional questions you answer them. If not, “I stand relieved of the watch.” Sounds easy, but the details can get quite messy. For example, before I moved to the Data Warehouse group I created this transition document that outlined those items my replacement would need to know to be productive:

Application DBA – Transition Plan

  1. Database Server Management
    1. Disk Space Management
      1. CA Tickets
      2. Database Growth Estimation
    2. Performance Monitoring
  2. Database Build Management
    1. Data Refresh
      1. Redgate
        1. SQL Data Compare Projects
        2. SQL Package Projects
      2. Final Builder Project and Schedule
    2. SQL Auto-builds
      1. Final Builder Project and Schedule
    3. TCW Builds
      1. QA Build Process
      2. Prod Build Process
    4. Metadata Environment
      1. DDL-only Build
  3. Turn Process
    1. Pre-Turn Validation
    2. Post-Turn Validation
    3. Damage Control – Turn Gone Wrong
      1. Discovery
      2. Correction
      3. Cleanup
  4. Database Construction
    1. SQL
      1. “Do No Harm” Scripts
      2. True One-shots
    2. SSIS
      1. Construction
      2. Operation
        1. Database Server
        2. App Server
  5. Control/Meta Data/Source Code Management
    1. SQL
      1. TFS Management
    2. SSIS
      1. Construction
      2. Operation & Management
    3. Configuration Data
      1. ORCA Files
      2. CMD Files
      3. CONFIG Files
    4. Control/Meta Data
  6. Database Consultation
    1. Strategy
      1. Technical Roadmap
    2. Tactics
      1. Database Design
      2. Query Optimization
  7. Environmental Management
    1. General Server Health Monitoring and Management
      1. Architecture Database
        1. dbo.SQLPerfMon – Stores the aggregated historical data from the query cache; captured every hour.
    2. Technical Story Creation
      1. General System Improvement
      2. Database Specific Improvement

Environment Management Team – Transition Plan

  1. New FTP Server Migration
    1. FTPNEWPRODSERVER – Production
      1. Replaces FTPPRODSERVER
      2. DNS – FTPProd
    2. FTPNEWTESTSERVER – Lower Environments
      1. Replaces FTPTESTSERVER
      2. DNS – FTPProof, FTPQA, FTPTest, FTPDev
  2. General Monitoring
    1. Messaging
    2. ISO Listener
    3. Process Tracker
  3. Environment CA Ticket Resolution
    1. Research
    2. Procedural Resolution    
  4. Metavante Management
    1. Password Reset
    2. Outage Management

Needless to say, that is quite a bit of information…and this is just the outline. Almost all of this information was held in a multitude of documents in the document repository. It was necessary to understand all of it to properly manage the SQL Server environments. Really, this was the MINIMUM NECESSARY INFORMATION to transfer. I could have dumbed it down, but then I’d be doing a disservice to my fellow associates and not leading them toward perfect service.

And that is the crux of this discussion: Leadership….Anyone at any time has the ability, and responsibility, to lead. And it is when you exhibit leadership that everyone benefits; management, customers, fellow associates and shareholders.

Many lose sight of the fact that leadership is an exhibited attribute of a person, not the authority of a position. They similarly confuse the action of leading with the act of dictating. This couldn’t be further from the truth. In the simplest terms, a leader is someone whom others follow. So if you are responsible for handing off a system (or systems) to another person, group, or unit, you are a leader…Act accordingly.

App DBA? What’s that?

Thursday, June 14th, 2012

Over the years I’ve been a DBA on and off, depending on what my employer at the time needed. Some IT shops do not even have a DBA. They have either a developer who thinks they are qualified because they know SQL, or a network engineer who sat through an MCSE boot camp course and knows how to use Google. Neither of these is a skilled DBA. So what does it mean to be a DBA, and why bother having the skill around?

I’m going to restrict my comments and analysis to the Microsoft platform, SQL Server. I’ve worked with Oracle, Sybase, Ingres, Informix, and a couple of other now-defunct platforms, but SQL Server has been my concentration for years and is what I know best. Regardless, I think my comments would apply to any of the other platforms.

You can prowl the web and find all kinds of different DBAs, each with the appropriate description and caveats. So as not to get all tripped up on titles, I simplify the list down to these three:

  • Data Architect – This position is logical/theoretical in nature: They envision the future. They translate the strategic goals of the company into tactical plans that can be implemented by the DBA’s. They look into the future 3 to 5 years, driving the technological direction. They outline the flow of information company-wide and design the overall logical model(s).
  • Production DBA – This position is technically focused on platform installation, maintenance, and health: They establish the infrastructure. They are the gatekeeper that ensures that the intellectual property of the company is safe, secure, and available. They work across servers and databases to monitor health and performance, tuning as appropriate. They work with the application DBA as necessary to help the business units achieve their goals.
  • Application DBA – This position, in contrast, is business focused: They implement the features. Usually integrated with the development/support team to help manage the intricacies of an especially complex application. They work within the database boundaries to ensure that data integrity, query performance, and overall database functionality work for the business process and are appropriate for the business goals.

These, of course, are just my opinions. But, an organization with these three positions in place would cover almost all the needed territory. You could conceptualize this as:

Vision -> Infrastructure -> Implementation

We seem to do very well with the first two: I know quite a few people over in Database Services for my current company, and they do a good job of supporting LITERALLY THOUSANDS of databases; the numbers are mind-numbing and GROWING. And the support I’ve received from this group is to be commended. The amount of abuse that my DBA (who I think is the model of a Production DBA) took while just chuggin’ along is astounding. His group does a great job on Vision and Infrastructure.

But what about Implementation? Who is ensuring the proper use of the database features and ensuring quality down at the level of the developers? When I was given the App DBA position when I first arrived, I tried to find others on whom I could model my effort. When I asked Darrell about it, he just laughed at me. So I sat down with my manager at the time, Bob Johnson, and designed the position for our unit. The App DBA would be responsible for:

  1. Develop technical deployment plans, coordinate cross-team deployment efforts, and ensure smooth deployments.

    • Responsible for creation, maintenance, and process improvements of all release deployment plans for all code bases.
    • Responsible for all coordination efforts with DBA Group, including completion of necessary processes, documentation and securing the necessary approvals.
    • Responsible for environment resource allocation and planning for all release efforts in all stages of SDLC.
    • Execute deployment plans in all environments for entire SQL code base, including supporting infrastructure (e.g., DTS, SSIS, replication, etc.).
    • Ensure all supporting documentation is updated to reflect release changes.
    • Ensure the integrity and health of environments during deployment.
    • Test and validate data script execution in all lower environments, track issues and manage correction for release.
    • Review, execute and capture results for production support data scripts.
    • Route all necessary documentation for SOX and resolution.

  2. Work with all teams and members to ensure functional, reliable, and secure environments are available.

    • Perform periodic review and control of access in all environments and user groups.
    • Manage, schedule and execute data refreshes across all environments.
    • Ensure all SQL scripts follow approved guidelines and are managed within TFS.
    • Identify, design and implement environment management process improvements.
    • Execute performance monitoring in all environments when appropriate and initiate proactive issue resolution.
    • Ensure all data is secured and in compliance with audit requirements and provide support for audit activities.
    • Work with DBA Group to plan and implement server and network infrastructure changes to ensure maximum uptime and minimum interference.
    • Monitor database performance and investigate anomalies; document issues, causes, and possible resolutions. Communicate issues to management, product owner, and developers.

  3. Conduct process identification, analysis, improvement and automation activities.

    • Develop, test and deploy tasks that assist in automation.
    • Validate that structural changes and templates are designed to support automation and process improvement.
    • Initiate the code review process for code changes and complete agreed-upon changes.
    • Document and perform analysis of current technical and business processes.
    • Design process and procedures that take into consideration future business needs and flexibility.
    • Design and develop automation taking into consideration the industry’s current technological and methodological directions.
    • Focus on adaptability and continuous integration to support Agile development.
    • Provide improved visibility of environmental management, deployment, and integration processes.
    • Investigate, determine, and implement the tools that are the best fit based on current and future business needs and constraints.

  4. Set the direction of database interaction through consulting and collaboration to ensure the flexibility, usability, and scalability of database resources.

    • Create/maintain the design view of the database system, which encompasses the ERD, data dictionary, and SQL object definitions.
    • Create/maintain the data flow diagrams and the process view, which addresses the security, performance, and scalability of the system.
    • Participate in requirements definition. Capture/maintain the architectural aspects of the database via ERDs, data flow diagrams, and replication/index plans.
    • Perform analysis of current technical and architectural needs.
    • Design systems that take into consideration future business needs and flexibility.
    • Design databases taking into consideration technology industry directions such as master data management and universal data modeling.
    • Design physical database models that support the business need and maximizes cost effectiveness.
    • Define the methods used for scripting the database, produce templates, and train developers on their use.

  5. Provide database consultation and support for the development areas.

    • Collaborate with the development areas to ensure that database architecture strategies are clearly communicated and the hand-off to detailed design occurs smoothly.
    • Create prototypes and sample code/templates for use by the development areas.
    • Produce blueprints and documentation related to the database standards and processes.
    • Lead and provide coaching and mentoring in database code reviews.
    • Provide coaching and mentoring to the development resources throughout the development process.

After we agreed to these items and started to put them in place, things got a lot smoother. So…Where are the App DBA’s? And what are they doing?

Accountability and the Mystery of the Missing Artifacts

Thursday, June 14th, 2012

It took me a few months after I first arrived to understand the entire IT lexicon at my new company. It took a little longer to understand the necessary artifacts and constructs required for proper project management. These are the necessary items to understand to properly fit into this company’s IT culture. These items produce a common ‘language’ from which communication and action work together to accomplish a set of goals. But what happens when the ‘language’ is missing?

Whether you follow SDLC or Agile, there is a minimum set of artifacts that needs to be created and used to effectively communicate the project and ensure that it is completed correctly. These artifacts are important for several reasons:

  • Establishment of a baseline for understanding the goals and objectives of the project.
  • Creation of common definitions vital to effective communication between project participants.
  • An outline of what is, and is not, expected of each participant.
  • Measurements for success.

There are others of course, but from a project management standpoint, having these items covered would go a long way toward instituting a can-do environment. And it is can-do environments that consistently build the applications a company needs to stay competitive and profitable…which is in everyone’s interest: executive, manager, associate, customer, and shareholder.

So when artifacts are missing, they jeopardize the successful completion of the projects necessary for a company to stay competitive and profitable. This is an accountability issue. Even if a well-established standard is in place, if the project participants do not ensure that the proper artifacts are being produced, the project team is not being accountable:

1. (of a person, organization, or institution) Required or expected to justify actions or decisions; responsible.
2. Explicable; understandable

I will suggest that any time the prescribed artifacts are in absentia, some accountability is missing. And when accountability is missing, our path toward competitiveness and profitability will start to suffer. So it is in the interest of each participant to ensure that accountability is maintained. In doing so, you help the project and your fellow associates achieve the company’s goals.

Techniques that can be used to ensure accountability for artifact creation:

  • Organization Level
    • Implementation of standard processes and auditing functions for project artifacts.
    • Ongoing associate training and engagement.
    • Programs for team recognition of proper project management processes: Recognize the benefit of associate accountability.
    • Remedial training/interventions for projects not following standard processes: Recognize the harm of a lack of accountability.
    • Active monitoring of projects, from inception to completion.
  • Project Level
    • Effective feedback from the project manager to participants.
    • Maintain effective communication to all project participants and appropriate organizational entities.
    • Participant enrollment and proper delegation of artifact creation.
    • Employ appropriate process and tracking features for selected methodology: Seek out necessary assistance.
  • Peer Level
    • Work within the team dynamic to identify missing artifacts.
    • Volunteer to assist in artifact creation.
    • Provide feedback using positive and enrolling communication.

I’m sure there are quite a few more techniques that we could add; these are but a few ideas to establish and maintain a culture of accountability.

So, what artifacts are missing from your project(s)? What are you doing to ensure accountability? What is that missing accountability costing?

Innovation Incubators

Wednesday, June 13th, 2012

I have a proposal. It might be considered quite radical on the face of it, but with a little backing we could add some serious value. And…believe it or not…we are already doing it, at least informally.

Companies should construct a formal unit to act as an innovation incubator. I see this as being different from, but complementary to, a Center of Excellence. It would reach out to the various engineering groups to see how common problems were handled, evaluate the results, and see whether a solution could be made available company-wide. In fact, we are already doing it, just with a lot fewer controls and a lot more risk. Case in point, TPS (known locally as either the Transformation Processing System or the Task Processing Service)…

TPS – A Precedent

TPS is a home-grown, Windows-based, .NET scheduling and batch processing system. It runs on a Windows server as a service, allowing a multitude of functionality to be orchestrated (grouped together and run in order). It can be expanded both internally (changing the TPS code base) and externally (adding/altering the interface calls). It has the capacity to manage errors, send email, page on-call, etc. It tracks intermediate steps and gathers quantitative values for later analysis and reporting. In short, it is a fairly rugged, home-grown scheduler that is versatile and expandable.
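The core idea of such an orchestrator can be sketched in a few lines (a toy in Python, not TPS’s actual design or code):

```python
# A toy orchestrator in the spirit of TPS: run a group of steps in
# order, track intermediate results, and capture failures instead of
# crashing the whole batch. Purely illustrative; not TPS's code base.
def run_batch(steps):
    """steps: list of (name, callable). Returns a per-step status log."""
    log = []
    for name, step in steps:
        try:
            result = step()
            log.append((name, "ok", result))
        except Exception as exc:  # a real system would also email/page on-call
            log.append((name, "error", str(exc)))
            break  # stop the orchestration on failure
    return log

steps = [
    ("extract", lambda: 42),
    ("transform", lambda: 1 / 0),  # simulated failure
    ("load", lambda: "never runs"),
]
print(run_batch(steps))
```

The status log is what makes the later analysis and reporting possible: every intermediate step leaves a record, success or failure.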

We were not the only unit in need of such functionality. Another unit was looking for like functionality but was unable to purchase a commercial scheduling tool that would fit the bill. They reached out to us to discuss the use of TPS. The long and short of it is that after several technical discussions, we collectively drew up an agreement, and a branch was created in TFS to allow the other unit to have their own version of TPS. As of this writing, TPS is a vital tool for both units and the company.


Now that the precedent has been set, where to from here? I would suggest that the following construct could take TPS to the next level:

  • Talk to existing users: What works? What could be improved? What is missing? (i.e., stakeholder engagement)
  • Set out a roadmap for tool generalization, adoption, enhancement, and rollout.
  • Reach out to prospective users through existing venues (ACOP, EA, etc.).
  • Allocate a subset of budget dollars from current and interested parties.
  • Assign a group with both experience with the tool and available resources to ‘own’ the product and enhance it based on the roadmap.

Expansion and leverage after that would bring flexibility (because we could change it in-house) and economies of scale (build it once, use it multiple times). As more groups use the tool, Humana gains ‘royalties’ on its own product in the form of saved budget dollars. This can be measured and tracked. There is not much difference between this and, say, Application Blocks. (Has anyone ever tried to measure the benefit of Application Blocks against budget dollars?) There is the possible cultural issue of disregarding tools that others have created, but cultural change is a top-down issue that could be managed by the endorsement of this endeavor, much like the Agile initiative. When people see the natural advantage, they will come to see its wisdom.

Think Ideas, Not Items

This thinking can extend to almost all aspects of the IT business. A process can be treated the same as a tool. Keron Salmon is using the same process over and over again to automate .NET builds for our unit, Billing, Mobile, and a couple of others. What could we do if we proceduralized, documented, and evangelized these configuration management processes like we evangelize Agile? What else could be enhanced and leveraged to provide value? Off the top of my head, I can think of these…

  • Tools
    • Object Messaging (A lot like Tuxedo, except without all the overhead.)
    • Service Monitor
  • Processes
    • Automated .NET Builds (I don’t believe HUMBUILD has all the capability that is being used in Spending Account, and other, builds.)
    • Automated SQL Builds
    • Automated Data Refreshes
    • Automated Testing

There are many, many more. What can you think of that could be adopted widely and add value?

Data Refreshes & Metadata

Tuesday, June 12th, 2012

Data refreshes are a vital part of preparing an environment for testing. During any rigorous testing phase, two things happen to the data in the test environment: 1) the data gets modified (or even mangled) when testing the extremes of the possible data ranges; and 2) the data gets stale (drifts further from the current production state). This also occurs in the development environment, but to a lesser extent. Normally all you have to do is grab a production backup, push it down to the target environment, and overlay the immediate build. But what if you have intermixed environmentally sensitive data with the regular data?

“What the heck is ‘environmentally sensitive data’?” I can hear you say. OK…here’s the scenario: You have tables in the application’s database that contain URLs to the environment’s web service host. You restore the production backup onto the testing server, but now you have production URL pointers in the test environment. If testing proceeded without changing the URLs to the test web service host, there could be major issues including, but not limited to:

  • Invalid testing results.
  • Misreported defects.
  • Updating the production system during testing.
  • Inappropriate outbound communications. (Yes, this does happen!)

What other types of data could cause similar issues? Email addresses, redirecting URLs, server names, drive URIs, account numbers, etc. In fact, up to the time I left Spending Accounts, new types of metadata were being integrated into the databases every release. Each new piece of metadata increased the possibility of an issue turning into a full-blown production support incident or a PR problem.
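To make this concrete, the kind of in-place fix a refresh has to apply looks something like this (the table names, column names, and URLs here are hypothetical, not Humana's actual schema):

```sql
-- After restoring production data to the test server, point the
-- service endpoints back at the test hosts.
-- Table, column, and host names are illustrative only.
UPDATE dbo.AppConfiguration
SET    ServiceUrl = REPLACE(ServiceUrl,
                            'https://services.prod.example.com',
                            'https://services.test.example.com')
WHERE  ServiceUrl LIKE 'https://services.prod.example.com%';

-- Keep outbound communications from reaching real members.
UPDATE dbo.NotificationSettings
SET    RecipientEmail = 'test-notifications@example.com';
```

Every new piece of metadata means another statement like these, for every environment, every refresh.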

Needless to say, we want to minimize these types of occurrences, so ensuring the correct metadata is in place is vital to both release testing and system integrity. With this in mind, I set out a strategy generic enough to work regardless of the actual platform or number of environments.


The construct’s rules are as follows:

  1. Data is only pushed down to a lower environment (Production -> UAT -> Testing -> Development).
  2. Code is only pushed up to a higher environment (Development -> Testing -> UAT -> Production).
  3. Metadata is maintained only in the environment that it is targeted for.

Sounds easy enough? In general, it is pretty easy to grab a backup from a production database and restore it to a lower environment (taking into consideration any size limitations). And it is also pretty easy to overlay a backup with whatever builds (source code) have not yet been applied.

But the metadata….That is a little tougher. It must be updated ‘in-place’, as it were: You must save it off prior to restoring the production data, and then overwrite whatever was brought over from production. And to make it a little tougher, the records are not always a one-to-one match. You might also have some issues with surrogate key consistency that must be taken into account. So when I first started to look at the possible solutions, I saw that I could:

  • Write DML scripts to completely update the data: A little tedious, even if you write code to create the DML. I would have to generate the scripts before every refresh for every environment. Also there is the problem of interspersed metadata in primarily non-metadata tables…bother!
  • Keep a separate copy of all metadata containing tables, by environment, segregated from the other environments. Then overlay the restored database with the metadata. Sounds much simpler….But is it?
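The second option can be sketched in T-SQL like this (the database, table, and key names are assumptions for illustration):

```sql
-- BEFORE the refresh: snapshot the test environment's metadata into a
-- side database that the restore will not touch.
-- Database, table, and column names are illustrative only.
SELECT *
INTO   MetadataStore.dbo.AppConfiguration_Test
FROM   SpendingAccounts_Test.dbo.AppConfiguration;

-- ...restore the production backup over SpendingAccounts_Test...

-- AFTER the refresh: overlay the saved test metadata, matching on a
-- natural key rather than the surrogate key, which may have shifted
-- between environments.
MERGE SpendingAccounts_Test.dbo.AppConfiguration  AS target
USING MetadataStore.dbo.AppConfiguration_Test     AS source
    ON target.ConfigKey = source.ConfigKey
WHEN MATCHED THEN
    UPDATE SET target.ConfigValue = source.ConfigValue;
```

Multiply that by every metadata-bearing table and every environment, and "much simpler" starts to look debatable.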

After a little research, I turned to an old friend…the SQLToolbelt from Redgate Software. Specifically, I used the SQL Data Compare tool. This allowed me to:

  • Create projects that allowed the polling and storage of data from a target environment to a common database.
  • Create projects that allowed the syncing of data from the common database to the target environment.
  • Prepare for command-line execution for open-system automation.
  • Create reports to a file (for both later auditing/accountability and concatenate into overall system health report).
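Under that setup, the automated refresh step just calls the tool from the command line with a saved project. Roughly, it looks like the sketch below; I am going from memory on the switch names and install path, so treat them as assumptions and verify against the Redgate documentation:

```bat
REM Sync the saved test metadata back over the freshly restored database
REM using a saved SQL Data Compare project, and write an audit report.
REM Paths, project name, and switches are illustrative assumptions.
"C:\Program Files\Red Gate\SQL Data Compare\SQLDataCompare.exe" ^
    /project:"D:\Refresh\Test_Metadata_Overlay.sdc" ^
    /sync ^
    /report:"D:\Refresh\Reports\Test_Metadata_Overlay.html" ^
    /reporttype:Html
```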

I had SQLToolbelt certified here at Humana and we purchased a couple of licenses. We used this tool to automate the data cleansing and refresh process. Later on I’ll give a detailed outline of the actual data refresh process for Spending Accounts.

Primum Non Nocere

Tuesday, June 12th, 2012

One of the items I ran into when I took over as the App DBA for SPAA was the complete disregard of source code control for SQL objects. At times there could be two, or maybe three, different scripts for the same object in the same build. So when the build occurred, it was possible to overwrite change A with change B, which in turn usually resulted in a defect. It was frustrating and it had to change.

So to put an end to this issue we instituted formal source code control for all SQL objects. The rules were:

  • Each SQL object would have a single source code, or SQL script, file.
  • Each SQL object script would contain all DDL, including security grants, so that they would be whole and complete.
  • Each SQL object script would be written in such a way as to Do No Harm (or in Latin: Primum Non Nocere).

Following these rules allowed us to ensure that all changes to a single object were contained within a single file and that file would have a history that could be tracked. They also ensured that if the file was executed, it contained everything necessary to allow proper operation of the SQL object. Lastly, they allowed the script to be run on any environment, at any time, ensuring that the database would be in the proper state.

Do No Harm

The Do No Harm philosophy had two main paths: table scripts, and everything else. The reason the table scripts have to be treated differently is that they are objects with permanent state. All the other objects (stored procedures, functions, views, etc.) are stateless. Regardless of which type of script was managed, all scripts followed this simple rule:

All DDL manipulation occurred within the confines of a controlled database transaction.

This ensured that the script worked completely, or returned the object to its original state.

So for the stateless objects the rule was drop and recreate. Doing this provided several advantages:

  1. You could query the system tables for when the object was created. This works well if you, say, want to verify a turn the Turn Center just completed.
  2. If the object didn’t exist, it would just be created and the script would move on. Before, it was possible to get an error when you ALTERed an object that didn’t exist.
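A stateless object script following this pattern looked roughly like the sketch below (the procedure, table, and role names are illustrative, and the real scripts also carried error handling to force a rollback on failure):

```sql
-- Do No Harm template for a stateless object: drop and recreate
-- inside a transaction, with the security grant included so the
-- script is whole and complete. Object names are illustrative.
BEGIN TRANSACTION;
GO

IF OBJECT_ID(N'dbo.GetAccountBalance', N'P') IS NOT NULL
    DROP PROCEDURE dbo.GetAccountBalance;
GO

-- CREATE PROCEDURE must be alone in its batch, hence the GO separators;
-- the transaction spans the batches within the same session.
CREATE PROCEDURE dbo.GetAccountBalance
    @AccountId INT
AS
BEGIN
    SELECT Balance
    FROM   dbo.Account
    WHERE  AccountId = @AccountId;
END;
GO

GRANT EXECUTE ON dbo.GetAccountBalance TO AppRole;
GO

COMMIT TRANSACTION;
GO
```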

For objects with permanent state (tables), you have to maintain the state. Therefore you have to alter in place…And this is where it gets tricky. So for table scripts the following rules were followed:

  1. Tables are created once, so you had to check for existence before creating the table.
  2. DDL could be added to the script, but existing DDL was never altered.
  3. As new DDL is added to the script, the appropriate requirement checks had to be made prior to execution. Say, for example, you wanted to add a column: that ALTER was segregated from the original CREATE statement, and you checked whether the column existed prior to ALTERing the table to add it.
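A table script under these rules looked something like this (the table and column names are illustrative):

```sql
-- Do No Harm template for an object with permanent state (a table).
-- The script can be re-run on any environment at any time; each DDL
-- step is guarded by its own existence check. Names are illustrative.
BEGIN TRANSACTION;

-- Original CREATE: runs only if the table does not yet exist.
IF OBJECT_ID(N'dbo.Account', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.Account
    (
        AccountId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        Balance   DECIMAL(18, 2)    NOT NULL DEFAULT (0)
    );
END;

-- Later release: add a column. The existing CREATE above is never
-- altered; the new DDL is appended with its own guard.
IF COL_LENGTH(N'dbo.Account', N'ClosedDate') IS NULL
BEGIN
    ALTER TABLE dbo.Account ADD ClosedDate DATETIME NULL;
END;

COMMIT TRANSACTION;
```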

It did take a little time to train the developers to follow this philosophy, but the payoffs were well worth it:

  • It allowed HEAVY automation, including automated SQL builds to the lower environments.
  • It reduced downtime during turns.
  • It reduced turn preparation.
  • It allowed automation of turn verification.
  • It reduced defects.

Now all we need is a starting code base….

Creating the Code Base

To establish the original code base files, we needed a way to create ~12,000 individual SQL scripts. No small order by any stretch of the imagination. So we used the Do No Harm templates as a base to construct a program that read the system tables of the PROD databases. The program extracted all the necessary information to reverse-engineer the T-SQL to a file, then that file was pushed to the code base structure of TFS. Easy!
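For the stateless objects, the extraction amounts to reading each object's definition out of the catalog views, something like this (a simplified sketch; the real program also handled tables, security grants, and file naming):

```sql
-- Pull the T-SQL definition of every procedure, function, and view
-- so each one can be wrapped in a Do No Harm template and written
-- to its own script file.
SELECT   s.name AS SchemaName,
         o.name AS ObjectName,
         o.type_desc,
         m.definition
FROM     sys.sql_modules AS m
JOIN     sys.objects     AS o ON o.object_id = m.object_id
JOIN     sys.schemas     AS s ON s.schema_id = o.schema_id
WHERE    o.type IN ('P', 'FN', 'TF', 'IF', 'V')
ORDER BY s.name, o.name;
```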

The question you have to ask yourself is this: Why isn’t everyone doing this?

“Leadership is action, not position.”

Monday, June 11th, 2012

I spent the first decade of my adult life in the US Navy. I went to interesting schools and had some unique experiences. During this time I was exposed to the concepts of leadership and management. But more than that, the Navy made a direct connection between the two concepts; leadership and management worked hand in hand to complete missions and maintain preparedness. Every officer, commissioned and non-commissioned alike, was expected to lead AND manage. Because leadership and management are skills that can be taught, honed, and mastered, the US Navy spent considerable resources teaching them.

The reason the US Navy taught both skills is that they have different characteristics and applications. One of the better articles on the subject can be found here, but I summarized the highlights below…

Leadership is:

  • Vision
  • Followers
  • Inspiring and Charismatic
  • Charting a Course
  • Empowerment
  • Focused on People

While Management is:

  • Efficiency
  • Subordinates
  • Productive and Efficient
  • Following a Map
  • Control
  • Focused on Task

When I look at the differences, I can see why they taught both. What I see is synergy, and I’m not the only one. And when I look back at all those who I thought were good as a leader or manager, they always had both skills firmly tucked in their tool belt. When one dominates the other, the overall mission may become compromised.

After leaving the US Navy, I entered the corporate world where management was discussed ad nauseam, but I didn’t hear half as much about leadership….At least that was the impression I got. There was definitely leadership at the executive level, but I always felt that management predominated at the level I was operating in. Is that a good thing? I think not. Leadership is strategy, management is tactics: To win in any struggle, both have to be properly employed.

What leadership and management traits are you exhibiting? Do you have a balance? Take this test and find out.