A New Adventure…

I’ve added a few new acronyms to my life this past week:  BMI, BNA, and TN.

I’ve accepted a position at Broadcast Music Inc. (BMI), which is in Nashville (BNA), Tennessee (TN).  This will require a move from Charlotte to Nashville, something we are going to have to work on over the next few weeks.

I will be a Senior ETL Developer at BMI and I am very excited about this new position.  They have a ton of ETL packages to move to SSIS and the Microsoft stack.  It will almost certainly involve BIML because of the sheer number of packages and data feeds.  It sounds awesome.

So this One Man is off to mow a new meadow once again.  Say a prayer for me and wish me luck.  Thanks!

Biml Tutorial 1 – Introducing BIML

Intro to Automated Package and Template Generation with BIML

In order to simplify and automate the template creation and package creation process, we will create a Proof-of-concept (POC) that uses the Business Intelligence Markup Language (BIML, pronounced bim-ul).  BIML provides a language with which to generate SSIS packages using highly configurable metadata.

Business Intelligence Markup Language (BIML)

BIML has been a free component of BIDS Helper for years.  BIDS Helper is a well-established open-source product that has been part of the SSIS developer’s toolkit through many versions of SQL Server, up to and including SQL Server 2014.  BIML is an XML-based language, with code embedded in the XML document to perform specific functions.  It looks and feels a lot like working with standard ASP pages.


BIDS Helper is available for SQL Server 2008 R2 through SQL Server 2014 and Visual Studio 2010 through Visual Studio 2013.  For Visual Studio 2015, SQL Server 2014, and SQL Server 2016, a new tool, BIML Express, is available.  Like BIDS Helper, it is a Visual Studio add-in.  However, BIML Express gives us the ability to edit BIML pages inside Visual Studio, with IntelliSense and syntax highlighting, something that was very awkward with BIDS Helper.

The high-end editor for BIML is Varigence’s Mist product (available for $249 per developer per month).  Varigence also hosts a free online editor called BIML Online.

If your development shop can afford it, Mist is the way to go.  If not, you will be using a combination of Visual Studio 2015 with BIML Express to edit/update BIML files, BIML Online to reverse engineer *.dtsx packages into BIML files, and Notepad++ (with the XML Plugin).

SSIS Template Generation

BIML works to automate template generation by using a combination of metadata (stored in SQL Server tables) and C# code, stored in the *.biml files themselves.

The basic structure of a package in BIML is as follows:
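The original code listing did not survive the migration of this post, so the following is a hand-written sketch of the structure being described.  The element names are standard Biml; the container names, connection name, and variable are assumptions for illustration:

```xml
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Packages>
    <Package Name="SamplePackage" ConstraintMode="Linear" ProtectionLevel="DontSaveSensitive">
      <Connections>
        <!-- Package-level connection references; project-level connections are available automatically -->
        <Connection ConnectionName="AuditDB" />
      </Connections>
      <Variables>
        <Variable Name="PackageStartTime" DataType="DateTime">1900-01-01</Variable>
      </Variables>
      <Tasks>
        <Container Name="Pre-Execute" ConstraintMode="Linear">
          <Tasks><!-- framework auditing/logging start tasks go here --></Tasks>
        </Container>
        <Container Name="Main Control Flow" ConstraintMode="Linear">
          <Tasks><!-- package-specific Control Flow and Data Flow tasks go here --></Tasks>
        </Container>
        <Container Name="Post-Execute" ConstraintMode="Linear">
          <Tasks><!-- framework auditing/logging end tasks go here --></Tasks>
        </Container>
      </Tasks>
    </Package>
  </Packages>
</Biml>
```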


You can see the Package tag that declares the Package Name.  Then it lists any package-level Connections.  (Project-level connections are automatically available.)  Then it lists the Variables that are declared in the Package.  Then we see the Control Flow Tasks, which we have divided into three Containers.  Within each container are the Control Flow tasks for that container.  In the “Main Control Flow” container, you would put the logic for this particular package; both Control Flow Tasks and Data Flow Tasks can be defined in this container.

If I run the BIML script above, it will output a standard *.dtsx SSIS package, shown at right.

Given the static XML above, and the standard package at right, what if I wanted to generate several packages, each with a different name, but conforming to all our standards?  To accomplish this with BIML, we first introduce a table in the BIML schema that will hold package specifications, and then we modify the XML above to use parameters to generate the packages.

The table could look like this:
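The original table illustration is missing from this archived copy; here is a minimal T-SQL sketch of what such a metadata table might look like (the table and column names are assumptions, not the actual schema):

```sql
CREATE TABLE Biml.PackageSpecification
(
    PackageSpecificationID int IDENTITY(1,1) PRIMARY KEY,
    PackageName            nvarchar(128) NOT NULL,
    ProtectionLevel        nvarchar(50)  NOT NULL DEFAULT 'DontSaveSensitive',
    IsActive               bit           NOT NULL DEFAULT 1
);

-- One row per package to be generated
INSERT INTO Biml.PackageSpecification (PackageName, ProtectionLevel)
VALUES ('LoadCustomers', 'DontSaveSensitive'),
       ('LoadOrders',    'DontSaveSensitive'),
       ('LoadProducts',  'DontSaveSensitive');
```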


The change to the BIML file is to add some C# code to loop through the packages in the metadata table defined above, and then to add some BIML-specific expression tags that evaluate those expressions.  The expression tags are written inside the quotes of attributes like Name and ProtectionLevel.  Like so:
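The BimlScript listing itself was lost here, so the following is a hand-written sketch of the technique just described.  The metadata table name and connection string are assumptions; the `<#= … #>` expression tags inside the attribute quotes are standard BimlScript:

```xml
<#@ import namespace="System.Data" #>
<#@ import namespace="System.Data.SqlClient" #>
<Biml xmlns="http://schemas.varigence.com/biml.xsd">
  <Packages>
    <#
    // Fetch one row per package to generate (table name is an assumption)
    var metadata = new DataTable();
    using (var conn = new SqlConnection("Server=.;Database=ETL;Integrated Security=SSPI;"))
    {
        new SqlDataAdapter("SELECT PackageName, ProtectionLevel FROM Biml.PackageSpecification", conn).Fill(metadata);
    }
    foreach (DataRow row in metadata.Rows) { #>
    <Package Name="<#= row["PackageName"] #>" ProtectionLevel="<#= row["ProtectionLevel"] #>" ConstraintMode="Linear">
      <Tasks>
        <Container Name="Main Control Flow" ConstraintMode="Linear" />
      </Tasks>
    </Package>
    <# } #>
  </Packages>
</Biml>
```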


Why Bother?

Why bother with the parameters?  Why not just run the script, rename the package, run the script again and rename the second package, run the script again and rename the third package?  (Sounds tedious, doesn’t it?)  The reason we bother is so that we can generate a lot of package templates, and then regenerate the package templates when the ETL Framework is updated.

For example, say we modified the ETL Framework a little to accommodate a new method for delivering notifications.  Without the metadata for BIML to use, we could ignore this change and only apply it to template packages going forward, using the script-and-rename technique.  Or we could manually edit each package in our solution to retrofit it so that the framework change can be used in the existing packages as well as new ones.  Or we could regenerate all the packages with BIML, and then edit each to copy the Main Control Flow container from the existing package to the new one.  This is, debatably, a better way of handling changes to the Framework, but it still incurs the tedium of copying the Main Control Flow container from the old to the new.  Not ideal.

Perhaps we could generate all or most of the main package, and move the custom tasks into a separate, child package?

It turns out, we can.

SSIS Package / Template Generation

Most packages do much the same thing.  They fetch some data from some source and load it into a database into a staging table.  Then they perform some transformations and load the staged data into some final tables.  The transformations in the middle might be custom, but the rest is not.  The Extract from the source to the staging tables can be configured with metadata and coded for in BIML.  The Load from intermediary tables to the final tables can be configured too.  The only bit that is data specific is the Transformation happening in the middle of the process.  If we isolate that in its own package, we can fully automate the master package generation and automate the child template generation.

This is accomplished by capturing connection data for the different sources, mapping data from the source to staging table, and mapping data from the intermediate table to the final table.

A simple data model can accommodate the metadata we need.  The Connection table needs to contain all the parameters for the different kinds of connections we might encounter, and BIML will identify which columns are needed for each connection type.  Packages need to know which connections they use.  The Extract into staging tables needs to know the field mappings from source to target.  Likewise, the Load from the intermediary tables to the final tables needs to know its field mappings from source to target.
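A data model along these lines might look like the following T-SQL sketch.  All table and column names here are illustrative, not the actual schema from this series:

```sql
-- Connections: one row per source/target, with type-specific columns
CREATE TABLE Biml.Connection
(
    ConnectionID     int IDENTITY(1,1) PRIMARY KEY,
    ConnectionName   nvarchar(128)  NOT NULL,
    ConnectionType   nvarchar(50)   NOT NULL,  -- e.g. OleDb, FlatFile, Excel
    ConnectionString nvarchar(1024) NULL,      -- which columns apply varies by type
    FilePath         nvarchar(260)  NULL
);

-- Which connections each generated package uses
CREATE TABLE Biml.PackageConnection
(
    PackageSpecificationID int NOT NULL,
    ConnectionID           int NOT NULL REFERENCES Biml.Connection (ConnectionID)
);

-- Field mappings for the Extract (source -> staging) and Load (staging -> final) steps
CREATE TABLE Biml.ColumnMapping
(
    ColumnMappingID        int IDENTITY(1,1) PRIMARY KEY,
    PackageSpecificationID int NOT NULL,
    Stage                  nvarchar(20)  NOT NULL,  -- 'Extract' or 'Load'
    SourceColumn           nvarchar(128) NOT NULL,
    TargetColumn           nvarchar(128) NOT NULL
);
```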

Continued here:  Introducing BIML – Continued

The Common ETL Framework Part 2


Auditing/Logging

Each package is responsible for logging at two different levels:  Audit and Debug.

The Audit level of logging needs to record the following for each package execution:

  1. Start Day and Time
  2. Executing Machine name
  3. Executing as Username
  4. Records Affected (if appropriate) – Can include Records Inserted, Updated, Deleted, Archived, Errored, etc.
  5. Error data if an error occurs – Includes SQL Error, Call Stack, etc.
  6. Presence of Bad Data, if any, with specifics
  7. End Day and Time

The Debug level of logging records all the Audit level items, plus Control Flow Task level logging, plus reporting the Bad Data, if any, with specifics.

In SSIS, this level of logging can be handled by adding Package and Task level event handlers.  The BIML code to create them looks like this.  There are several placeholders in this code that will be expanded upon in a later post.

Package Level Event Handlers:
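The code listing is missing from this archived copy; below is a minimal sketch of package-level event handlers in Biml.  The Audit procedure names and the AuditDB connection are assumptions, and the parameter mappings mentioned as placeholders are omitted:

```xml
<Package Name="SamplePackage" ConstraintMode="Linear">
  <Events>
    <Event EventType="OnPreExecute" Name="OnPreExecute" ConstraintMode="Linear">
      <Tasks>
        <ExecuteSQL Name="Log Package Start" ConnectionName="AuditDB">
          <!-- Parameter mapping (package name, machine, user, start time) omitted for brevity -->
          <DirectInput>EXEC Audit.LogPackageStart;</DirectInput>
        </ExecuteSQL>
      </Tasks>
    </Event>
    <Event EventType="OnError" Name="OnError" ConstraintMode="Linear">
      <Tasks>
        <ExecuteSQL Name="Log Package Error" ConnectionName="AuditDB">
          <DirectInput>EXEC Audit.LogPackageError;</DirectInput>
        </ExecuteSQL>
      </Tasks>
    </Event>
  </Events>
  <Tasks>
    <!-- package tasks -->
  </Tasks>
</Package>
```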


Main Control Flow Level Task Event Handlers:
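This listing is also missing; as a sketch, a task inside the Main Control Flow container can carry its own event handlers, for the Debug level of task logging.  Again, the procedure and connection names are assumptions:

```xml
<Container Name="Main Control Flow" ConstraintMode="Linear">
  <Tasks>
    <ExecuteSQL Name="Sample Task" ConnectionName="AuditDB">
      <DirectInput>SELECT 1;</DirectInput>
      <Events>
        <Event EventType="OnPostExecute" Name="Log Task End" ConstraintMode="Linear">
          <Tasks>
            <ExecuteSQL Name="Audit Task End" ConnectionName="AuditDB">
              <!-- Parameter mapping (task name, end time, rows affected) omitted for brevity -->
              <DirectInput>EXEC Audit.LogTaskEnd;</DirectInput>
            </ExecuteSQL>
          </Tasks>
        </Event>
      </Events>
    </ExecuteSQL>
  </Tasks>
</Container>
```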


Error Handling

There is a Package Level Error Handler in the example above.  Basically, we need to capture the errors at the package and task level and log them to an Audit.PackageExecutionError table.  Note: Bad Data that is being redirected to an Audit.BadDataLog table is not going to raise an error.  The Framework needs to be able to handle both sorts of errors and report on either of them in the notification portion of the Framework.  The Bad Data data flow is shown in the diagram below.
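As a sketch, the two audit tables might look like this; the column choices are assumptions drawn from the logging requirements above, not the actual schema:

```sql
CREATE TABLE Audit.PackageExecutionError
(
    PackageExecutionErrorID int IDENTITY(1,1) PRIMARY KEY,
    PackageName      nvarchar(128) NOT NULL,
    TaskName         nvarchar(128) NULL,      -- NULL for package-level errors
    ErrorCode        int           NULL,
    ErrorDescription nvarchar(max) NULL,
    ErrorDateTime    datetime2     NOT NULL DEFAULT SYSDATETIME()
);

CREATE TABLE Audit.BadDataLog
(
    BadDataLogID   int IDENTITY(1,1) PRIMARY KEY,
    PackageName    nvarchar(128)  NOT NULL,
    SourceRowData  nvarchar(max)  NULL,       -- the offending row, serialized
    FailureReason  nvarchar(1024) NULL,
    LoggedDateTime datetime2      NOT NULL DEFAULT SYSDATETIME()
);
```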



Notifications and Routing

Depending on the Success, Failure/Error, Unusual Behavior, or Bad Data condition of the Package or Task, different people need to be alerted.  The first pass of implementing the Routing will look at SMTP Email as the notification routing mechanism.  A later pass might look at HTTP or Web Service notifications.

All notification contact info will be in the Audit schema and will be maintainable through simple table updates.  Email addresses and notification recipients must not be hard coded into the packages, neither by a developer nor by BIML.  Notification recipients must be determined dynamically at run-time.
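A minimal sketch of such a contact table, with the SendErrorNotice and SendBadDataNotice flags and a configurable success-notification frequency (names are illustrative):

```sql
CREATE TABLE Audit.NotificationRecipient
(
    NotificationRecipientID int IDENTITY(1,1) PRIMARY KEY,
    RecipientName     nvarchar(128) NOT NULL,
    EmailAddress      nvarchar(256) NOT NULL,
    SendErrorNotice   bit NOT NULL DEFAULT 0,   -- receives Failure/Error and Unusual Behavior notices
    SendBadDataNotice bit NOT NULL DEFAULT 0,   -- receives Bad Data notices
    SuccessFrequency  nvarchar(20) NOT NULL DEFAULT 'None'  -- Daily, Weekly, Bi-weekly, Monthly, None
);
```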

In the Success condition, an optional statistical email to the specified notification recipients would suffice.  This email would show the Success of the Package and include Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, and the Affected Row Counts.  Notification recipients should be configurable to receive notifications Daily, Weekly, Bi-weekly, Monthly or Not at All.

In the Failure/Error condition, a detailed error message needs to be sent to the notification recipients that are indicated as SendErrorNotice in their contact profile.  Every package must name at least one SendErrorNotice contact.  These contacts will receive an email that shows that there was an error, the Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, Schema, and Table Name, Affected Row Counts, Error Message(s), Stack Trace (if available), Bad Data Condition (if there is one), and as much other information as we can harvest from SSIS.  The difference between a Failure and an Error is that a Failure is expected in certain circumstances, but an Error is not expected.

In the Unusual Behavior condition, a message needs to be sent to the SendErrorNotice recipients, as in the Failure condition above.  However, the Unusual Behavior notification will document the Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, Schema, and Table Name, Affected Row Counts, and Bad Data Condition (if there is one), as well as documenting the Unusual Behavior that triggered this condition.  For example, if a package that normally processes 100,000 rows suddenly processes 500,000 rows, that would be Unusual Behavior.  Likewise, for a package that typically takes 3 minutes to run, taking 20 minutes would be Unusual Behavior.

In the Bad Data condition, the package has not Failed, Errored, or experienced Unusual Behavior.  It has experienced rows being routed to the Bad Data path(s) in the Data Flow.  Details of the Bad Data need to be sent to the notification recipients that are indicated as SendBadDataNotice in their contact profile.  These notifications should include the Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, Schema, and Table Name, Affected Row Counts, Bad Data Row Counts, and Bad Data Row Details.


Multi-Threading

Multi-threading allows the package to run faster when there are tasks that can be run in parallel.  There are two multi-threading conditions that the Framework needs to handle:

  • Master Package Multi-threading of multiple child packages
  • Package Task execution within a package

Re-run ability

Packages should be re-runnable if the data has changed and we need to reload it.  For example, bad data was detected, logged and handled.  Now that the bad data has been fixed, we need to re-run the package for today’s data.  This implies that the package would know that its last run had bad data, that the package actions need to be reversed, and that the package can then reload the data for the day.  This is most often implemented as a clean-up step at the beginning of the ETL package that makes sure that none of the data it’s about to put into the database is already there, removing it if it is.  For example, say the package was loading data into the final tables when the database ran out of space and threw an error.  Once the space issue is fixed, the package would be re-run.  First, it would remove the data it loaded in the first run; then it would load the data normally as if it had not been run before.
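As a sketch, such a clean-up step might look like this in T-SQL (the table, column, and variable names are invented for illustration):

```sql
-- Remove any rows left behind by a previous (failed or bad-data) run of
-- today's load, so the package can be re-run safely
DELETE FROM Final.SalesOrder
WHERE LoadDate = @RunDate;

-- Staging is typically truncate-and-replace, so it is simply cleared
TRUNCATE TABLE Staging.SalesOrder;
```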


Restartability

Packages should be able to pick up from the task prior to the failure point and run properly (where this makes sense).  There’s no need to stage data we’ve already staged.  Typically, when a package fails, it has already done some processing.  Packages should be able to roll back the failed task and restart from that point.  This is an advanced ETL framework activity.  It can be very tricky to deal with restarting a failed package, especially if it has any complexity in it.

The Common ETL Framework Part 1

Defining a Common ETL Framework

SQL Server Integration Services (SSIS) provides a mechanism to Extract data from some source, Transform it as necessary, and Load it into the tables in a target database (ETL).

Each SSIS package should implement a common framework for logging, executing, monitoring and error handling so that these details are abstracted from the SSIS developer and are “just part of the template” that they should use for SSIS package creation.

Package automation takes this one step further by automating the creation of packages based on metadata that is configurable at design time.  I have selected Business Intelligence Markup Language (BIML) to perform this automation.  I will demonstrate how to use BIML to configure and generate packages in future articles in this series.

What Does the Common ETL Framework Need to Do?

The Framework needs to provide the services and behaviors that are common to all ETL packages.  These include:

  1. Auditing/Logging.  Execution Start, Stop and Error auditing for packages and tasks as they are executed.  Auditing/logging verbosity should be configurable so that we can enhance the logging when we need to debug something.
  2. Error Handling.  Appropriate error handling. Reroute bad data on read and on write.  Break the process only when necessary.  Log data, connectivity and process problems.
  3. Notifications and Routing.  Intelligent routing so that the appropriate systems and system owners are notified of a failure without spamming the whole department.
  4. Multi-Threading.  Master packages need to manage the multi-threading and dependencies of their child packages, watching for errors and handling them appropriately (i.e. Stop or Continue execution of other child packages upon a failure).
  5. Restartability.  Packages should be able to pick up from the task prior to the failure point and run properly (where this makes sense).  I.e.  No need to stage data we’ve already staged.
  6. Re-run ability. Packages should be re-runnable if the data has changed and we need to reload it.  For example, bad data was detected, logged and handled.  Now the bad data has been fixed; we need to re-run the package for today’s data.  This implies that the package would know that its last run had bad data, that the package actions need to be reversed and then the package can reload the data for the day.
  7. Load Strategies.  The Framework should handle a variety of data Extract and Load strategies:
    1. Truncate and Replace loads. Where appropriate, table data can be truncated and reloaded.  This technique is common for staging tables and for smaller datasets with no dependencies.
    2. Incremental loads. Where appropriate, data can be extracted incrementally and/or loaded incrementally, rather than a truncate and reload.  This is often used to keep history.  Incremental loads come in several flavors, some use a hash, others a date flag, others a set of change flags.  Deletions require special handling in incremental loads.
    3. Cycle loads. Where appropriate, a dataset can be loaded with a Days Old flag set to -1; then a simple update can increment the Days Old flag by 1 to cycle the data by a day.  A delete query can then remove rows that are too old to be kept.
  8. Data Retention.  The Framework should also handle data retention during the ETL process.  There are three strategies for removing data from the primary dataset:  deletion, archiving, and flagging.  A typical retention plan will employ all three to varying degrees.
  9. Stop/Cancel Job.  The Framework should be able to interrupt a job, stopping it immediately, or canceling it at the next step.  This also implies that the job is already Restartable or Re-runable.
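The cycle-load strategy above can be sketched in T-SQL like this; the table name and the 30-day retention window are invented for illustration:

```sql
-- 1. Load today's rows with a Days Old flag of -1
INSERT INTO dbo.DailySnapshot (BusinessKey, Amount, DaysOld)
SELECT BusinessKey, Amount, -1
FROM Staging.DailySnapshot;

-- 2. Cycle the data by a day: today's rows become 0, yesterday's become 1, etc.
UPDATE dbo.DailySnapshot
SET DaysOld = DaysOld + 1;

-- 3. Remove rows that are too old to be kept
DELETE FROM dbo.DailySnapshot
WHERE DaysOld > 30;
```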

Redirecting Web Traffic from HTTP to HTTPS

So I need to redirect the incoming traffic to my website so that it always uses https/SSL.

I am running IIS 7.5 on my Windows 7 machine.  I googled for “how do I make sure that all my web traffic uses https”.

The second link shown was to my web host, GoDaddy.com.  They have a page that says how to do it.  But what they didn’t tell me was how much effort it would take to get it to work.

First, I had to realize that the URL Rewrite Module was an IIS plug-in and not something that comes “in the box.”  I downloaded and installed the 64-bit version.  Once installed, I could see the URL Rewrite icon in IIS Manager.

Second, I added the code to my web.config file, inside the <system.webServer><rewrite><rules> element.  Here’s the code:

	<rule name="HTTP to HTTPS redirect" stopProcessing="true"> 
	    <match url="(.*)" /> 
	    <conditions>
	        <add input="{HTTPS}" pattern="off" ignoreCase="true" />
	    </conditions>
	    <action type="Redirect" redirectType="Permanent" 
	            url="https://{HTTP_HOST}/{R:1}" />
	</rule>

Four hours later, it works…

Here are the gotchas I ran into:

  1. Things get complicated if you try to put the redirect in a subsite.  The Rule code above must be in the web.config file of the root website, not in a subsite.
    (I never did figure out how you might do that.  In my case, it doesn’t matter because I want my entire domain to be running in HTTPS mode.)
  2. You cannot go to the SSL Settings screen in IIS Manager and check the option to require SSL.  That option must remain unchecked to allow the HTTP traffic through so the rule can redirect it.
  3. Your website must have bindings to both HTTP and HTTPS.
    1. In IIS Manager, select the root website on the left.
    2. Next, select the Bindings option under Edit Site in the Action bar on the right.
    3. Make sure you have bindings for HTTP (typically port 80) and HTTPS (typically port 443).
  4. You do not want to mess with the HTTP Redirect option in IIS Manager.  That will not solve the original problem. The HTTP Redirect is used to forward all traffic from, say, a parked domain, to another domain entirely.
  5. Did I mention that the Rule code above only works at the root level of the server?   Make sure it is in the web.config at the root level, not in a virtual directory or subsite below the root.

And that’s all I have to say about that.

Oh, one more thing.  Many thanks to Ruslan Yakushev (ruslany) at Microsoft for his excellent documentation and blog articles about the URL Rewrite Module.

Hadoop Follow Up – Hortonworks HDP Sandbox

The Hortonworks Hadoop Sandbox download got corrupted the first time.  It worked fine the second time.


I installed Oracle VirtualBox first.  Then, in the Oracle VM VirtualBox Manager, I selected the File | Import Appliance… option, selected the HDP_2.4_virtualbox_v3.ova file, and clicked Next and Import.
Importing the HDP Appliance

A few seconds later, the box was installed, so I started it up.  After  loading and starting a ton of stuff, it seemed to stop doing things and the screen looked like this:
HDP Appliance Screen

Connecting to the VM

I dismissed the two messages at the top and tried a zillion things to figure out what to do next.  Nothing.  Then I read something in the Hello World section of the Hortonworks tutorial site about the box’s address and how to connect to the Welcome Screen.  No wonder I couldn’t do anything inside the VM itself; the interface is web-based and is reached through a URL in the browser.  Entering that URL into my browser, I connected and saw this:
HDP Welcome Screen

Then I ran into difficulty because the firewall at work won’t let me download the tutorial files.  Ack!

My First Foray into Hadoop

So I have a big dataset (1.7 billion rows) that I want to analyze.  I figured, “Hey, Hadoop is all over this Big Data thing, I wonder if I can do a Proof of Concept?”

Compiling Hadoop on Windows (Ugh!)

So, first, I tried to follow some instructions on how to get the Hadoop source onto Windows and compile it.  It turns out that Hadoop is Java-based and most Hadoop programmers are Java programmers, so a lot of the instructions are written for Java developers.  And, good for me, the build engine is Maven, which I happen to know quite a bit about thanks to the weeks at CompanionCabinet where I automated the build using Maven.

However, it turned out that Ant was having a problem with running the SH command, and after several tries, I went googling for an already compiled version of the Hadoop project.  Lo and behold, I found one on GitHub:  https://github.com/karthikj1/Hadoop-2.7.1-Windows-64-binaries.  In the middle of the top area of the page is a “1 Release” link.  Click there to download the binary:

Hadoop Binary

Installing all the bits

Based on the wiki article here:  http://wiki.apache.org/hadoop/Hadoop2OnWindows.

I found the link to this:  Building.txt

Near the bottom of that file are some incomplete instructions on what to download, install, and do to compile your own version of Hadoop on Windows.

So I downloaded all these:

  1. Java Developers Kit (JDK) 1.7.0_80, includes Java Runtime Environment (JRE) 7.
    JDK Download
  2. Maven 3.3.9.
  3. Cygwin 64.
  4. CMake 3.5.2.
  5. zlib 128.
  6. protobuf 2.5.0.
  7. Windows 7 SDK.

Then I installed or unzipped the files.

  1. JDK 1.7 is an install.  I let it install to Program Files\Java.
  2. I copied the Maven file to the Java folder and unzipped it to a new folder (apache-maven-3.3.9).
  3. I installed Cygwin to the Program Files\Java\Cygwin folder.
  4. I installed CMake and accepted the defaults.
  5. I unzipped the zlib 128 files to Program Files\Java\zlib128-dll.
  6. I unzipped the protobuf files to Program Files\Java\protobuf-2.5.0.
  7. I tried to install the Windows 7 SDK but it had issues, which I ignored and proceeded on since I wasn’t going to compile my own Hadoop after all.
  8. I unzipped the Hadoop files to \hadoop-2.7.1.

Then I did the following steps:

  1. JAVA_HOME must be set, and the path must not contain spaces. If the full path would contain spaces, then use the Windows short path instead.  In my case, this was:
    set JAVA_HOME=C:\Progra~1\Java\jdk1.7.0_80\
  2. I created a C:\tmp folder because I didn’t have one and, by convention, Hadoop uses it.
  3. I added the ZLIB_HOME environment variable and pointed it to C:\Program Files\Java\zlib128-dll\include.
  4. I added several items to the PATH variable:  C:\Program Files\Java\apache-maven-3.3.9\bin;C:\Program Files (x86)\CMake\bin;C:\Program Files\Java\zlib128-dll;C:\Program Files\Java\Cygwin\bin
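Collected into one batch sketch, the environment setup above looks like this (the paths match my install locations; adjust for yours):

```bat
:: JAVA_HOME must not contain spaces; use the 8.3 short path
set JAVA_HOME=C:\Progra~1\Java\jdk1.7.0_80\

:: zlib headers for the native build
set ZLIB_HOME=C:\Program Files\Java\zlib128-dll\include

:: Maven, CMake, zlib and Cygwin on the PATH
set PATH=%PATH%;C:\Program Files\Java\apache-maven-3.3.9\bin
set PATH=%PATH%;C:\Program Files (x86)\CMake\bin
set PATH=%PATH%;C:\Program Files\Java\zlib128-dll
set PATH=%PATH%;C:\Program Files\Java\Cygwin\bin

:: Hadoop, by convention, expects C:\tmp to exist
if not exist C:\tmp mkdir C:\tmp
```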

With all that in place, I was ready to start Hadoop.

Starting Hadoop

Apparently I have to configure several files in the Hadoop etc\hadoop folder first.

Section 3 on the wiki page describes in detail how to change the configuration files.

I combined that information with the steps found on this article to create the file system, create a directory and put my txt file there.
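From memory, those steps amounted to something like the following commands, run from the Hadoop bin and sbin folders (the user directory and file name are illustrative):

```bat
:: One-time: format the namenode to create the file system
hdfs namenode -format

:: Start HDFS and YARN
start-dfs.cmd
start-yarn.cmd

:: Create a directory and put the txt file there
hdfs dfs -mkdir -p /user/hadoop
hdfs dfs -put mydata.txt /user/hadoop/
```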

What’s Next?

I am not sure what’s next.  Looks like I have some learning to do.

This article gives a nice technical overview of Hadoop.

And then I discovered Hortonworks.  Hortonworks Sandbox is an open-source VM with Hadoop and a bunch of tools already fully configured.  So I downloaded it onto a different machine and am trying it out right now.  I’m going to try the VirtualBox VM.  I used VMWare Player and VirtualBox some time ago and found VirtualBox a lot easier to work with.  It looks like the Hortonworks HDP Sandbox is going to take a while to download.  See you again on Monday.

In the meantime, I’m going to check out this tutorial on edureka.