A New Adventure…

I’ve added a few new acronyms to my life this past week:  BMI, BNA, and TN.

I’ve accepted a position at Broadcast Music Inc. (BMI), which is in Nashville (BNA), Tennessee (TN).  This will require a move from Charlotte to Nashville, something we are going to have to work on over the next few weeks.

I will be a Senior ETL Developer at BMI and I am very excited about this new position.  They have a ton of ETL packages to move to SSIS and the Microsoft stack.  It will almost certainly involve BIML because of the sheer number of packages and data feeds.  It sounds awesome.

So this One Man is off to mow a new meadow once again.  Say a prayer for me and wish me luck.  Thanks!

The Common ETL Framework Part 2

Auditing/Logging

Each package is responsible for logging at two different levels:  Audit and Debug.

The Audit level of logging needs to record the following for each package execution:

  1. Start Day and Time
  2. Executing Machine name
  3. Executing as Username
  4. Records Affected (if appropriate) – Can include Records Inserted, Updated, Deleted, Archived, Errored, etc.
  5. Error data if an error occurs – Includes SQL Error, Call Stack, etc.
  6. Presence of Bad Data, if any, with specifics
  7. End Day and Time
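
To make that list concrete, here is a rough sketch of what a package-execution audit table covering those items might look like.  The table and column names are illustrative assumptions for the sketch, not the actual framework schema:

    CREATE TABLE Audit.PackageExecution
    (
        PackageExecutionID  INT IDENTITY(1,1) PRIMARY KEY,
        PackageName         NVARCHAR(255) NOT NULL,
        StartDateTime       DATETIME2(3)  NOT NULL,       -- Start Day and Time
        EndDateTime         DATETIME2(3)  NULL,           -- End Day and Time, set when the package finishes
        MachineName         NVARCHAR(128) NOT NULL,       -- Executing Machine name
        ExecutedAsUserName  NVARCHAR(128) NOT NULL,       -- Executing as Username
        RowsInserted        INT NULL,                     -- Records Affected, broken out by operation
        RowsUpdated         INT NULL,
        RowsDeleted         INT NULL,
        RowsArchived        INT NULL,
        RowsErrored         INT NULL,
        HadBadData          BIT NOT NULL DEFAULT 0,       -- Presence of Bad Data
        BadDataDetails      NVARCHAR(MAX) NULL            -- specifics of the Bad Data, if any
    );

One way to populate it would be to insert a row in the package’s OnPreExecute event handler and update it in OnPostExecute.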

The Debug level of logging records all of the Audit level items, plus Control Flow Task level logging and the specifics of any Bad Data.

In SSIS, this level of logging can be handled by adding Package and Task level event handlers.  The BIML code to create them looks like this.  There are several placeholders in this code that will be expanded upon in a later post.

Package Level Event Handlers:

[Image: Package-level event handlers (BIML)]

Main Control Flow Level Task Event Handlers:

[Image: Main Control Flow task-level event handlers (BIML)]

Error Handling

There is a Package Level Error Handler in the example above.  Basically, we need to capture the errors at the package and task level and log them to an Audit.PackageExecutionError table.  Note: Bad Data that is being redirected to an Audit.BadDataLog table is not going to raise an error.  The Framework needs to be able to handle both sorts of errors and report on either of them in the notification portion of the Framework.  The Bad Data data flow is shown in the diagram below.

[Image: Bad Data / error-handling data flow]
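
As a rough illustration, the two audit tables mentioned above might look something like this.  Only the table names (Audit.PackageExecutionError and Audit.BadDataLog) come from the framework description; the column definitions are assumptions for the sketch:

    CREATE TABLE Audit.PackageExecutionError
    (
        PackageExecutionErrorID INT IDENTITY(1,1) PRIMARY KEY,
        PackageExecutionID      INT NOT NULL,             -- ties back to the package-execution audit row
        TaskName                NVARCHAR(255) NULL,       -- task that raised the error, if known
        ErrorDateTime           DATETIME2(3)  NOT NULL DEFAULT SYSDATETIME(),
        ErrorCode               INT NULL,
        ErrorDescription        NVARCHAR(MAX) NULL        -- SQL Error, Call Stack, etc.
    );

    CREATE TABLE Audit.BadDataLog
    (
        BadDataLogID            INT IDENTITY(1,1) PRIMARY KEY,
        PackageExecutionID      INT NOT NULL,
        SourceComponent         NVARCHAR(255) NULL,       -- data flow component that redirected the row
        BadDataReason           NVARCHAR(1000) NULL,      -- why the row was considered bad
        RawRowData              NVARCHAR(MAX) NULL        -- the offending row, serialized for review
    );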

Notification/Routing

Depending on the Success, Failure/Error, Unusual Behavior, or Bad Data condition of the Package or Task, there are different people that need to be alerted.  The first pass of implementing the Routing will look at SMTP Email as the notification routing mechanism.  A later pass might look at HTTP or Web Service notifications.

All notification contact info will be in the Audit schema and will be maintainable through simple table updates.  Email addresses and notification recipients must not be hard coded into the packages, neither by a developer nor by BIML.  Notification recipients must be determined dynamically at run-time.
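
A minimal sketch of such a contact table, with a name and columns that are assumptions rather than the actual design, might be:

    CREATE TABLE Audit.NotificationContact
    (
        NotificationContactID INT IDENTITY(1,1) PRIMARY KEY,
        PackageName           NVARCHAR(255) NOT NULL,
        ContactName           NVARCHAR(255) NOT NULL,
        EmailAddress          NVARCHAR(320) NOT NULL,
        SendSuccessNotice     BIT NOT NULL DEFAULT 0,
        SuccessFrequency      VARCHAR(10) NULL,           -- 'Daily', 'Weekly', 'Bi-weekly', 'Monthly', or NULL for Not at All
        SendErrorNotice       BIT NOT NULL DEFAULT 0,
        SendBadDataNotice     BIT NOT NULL DEFAULT 0
    );

Because the packages only ever query this table at run-time, adding or changing a recipient is a simple row update with no package redeployment.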

In the Success condition, an optional statistical email to the specified notification recipients would suffice.  This email would show the Success of the Package and include Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, and the Affected Row Counts.  Notification recipients should be configurable to receive notifications Daily, Weekly, Bi-weekly, Monthly or Not at All.

In the Failure/Error condition, a detailed error message needs to be sent to the notification recipients that are indicated as SendErrorNotice in their contact profile.  Every package must name at least one SendErrorNotice contact.  These contacts will receive an email that shows that there was an error, the Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, Schema, and Table Name, Affected Row Counts, Error Message(s), Stack Trace (if available), Bad Data Condition (if there is one), and as much other information as we can harvest from SSIS.  The difference between a Failure and an Error is that a Failure is expected in certain circumstances, but an Error is not expected.
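
For illustration, the run-time lookup of SendErrorNotice recipients could be as simple as the query below, run from whatever task builds the “To” line of the error email.  Table and column names follow the sketch above and are assumptions:

    DECLARE @PackageName NVARCHAR(255) = N'MyPackageName';   -- supplied by the running package

    SELECT EmailAddress
    FROM   Audit.NotificationContact
    WHERE  PackageName = @PackageName
           AND SendErrorNotice = 1;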

In the Unusual Behavior condition, a message needs to be sent to the SendErrorNotice recipients, as in the Failure condition above.  However, the Unusual Behavior notification will document the Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, Schema, and Table Name, Affected Row Counts, and Bad Data Condition (if there is one), as well as documenting the Unusual Behavior that triggered this condition.  For example, if a package that normally processes 100,000 rows suddenly processes 500,000 rows, that would be Unusual Behavior.  Likewise, a package that typically takes 3 minutes to run taking 20 minutes would be Unusual Behavior.
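
One simple way to detect this kind of Unusual Behavior is to compare the current run against recent history in the audit table.  The sketch below flags a run whose row count is more than three times above or below the average of the last 30 runs; the table, columns, and the 3x threshold are all illustrative assumptions:

    DECLARE @PackageName NVARCHAR(255) = N'MyPackageName';
    DECLARE @CurrentRows INT = 500000;                        -- row count from the run being checked

    ;WITH RecentRuns AS
    (
        SELECT TOP (30) RowsInserted
        FROM   Audit.PackageExecution
        WHERE  PackageName = @PackageName
               AND EndDateTime IS NOT NULL
        ORDER BY StartDateTime DESC
    )
    SELECT CASE
               WHEN @CurrentRows > 3 * AVG(1.0 * RowsInserted)
                 OR @CurrentRows < AVG(1.0 * RowsInserted) / 3
               THEN 1 ELSE 0
           END AS IsUnusualBehavior
    FROM RecentRuns;

The same idea works for elapsed time: compare the current duration to the average of DATEDIFF(minute, StartDateTime, EndDateTime) over recent runs.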

In the Bad Data condition, the package has not Failed, Errored, or experienced Unusual Behavior.  It has experienced rows being routed to the Bad Data path(s) in the Data Flow.  Details of the Bad Data need to be sent to the notification recipients that are indicated as SendBadDataNotice in their contact profile.  These notifications should include the Package Name, Start Time, End Time and Elapsed Time, Data Source, Target DB, Schema, and Table Name, Affected Row Counts, Bad Data Row Counts, and Bad Data Row Details.

Multi-threading

Multi-threading allows the package to run faster when there are tasks that can be run in parallel.  There are two multi-threading conditions that the Framework needs to handle:

  • Master Package Multi-threading of multiple child packages
  • Package Task execution within a package

Re-run ability

Packages should be re-runnable if the data has changed and we need to reload it.  For example, bad data was detected, logged and handled.  Now that the bad data has been fixed, we need to re-run the package for today’s data.  This implies that the package would know that its last run had bad data, that the package actions need to be reversed, and that the package can then reload the data for the day.  This is most often implemented as a clean-up step at the beginning of the ETL package that checks whether any of the data it is about to put into the database is already there and removes it if it is.  For example, the package was loading data into the final tables when the database ran out of space and threw an error.  Once the space issue is fixed, the package would be re-run.  First, it would remove the data it loaded in the first run, then it would load the data normally as if it had not been run before.
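
As a minimal sketch of that clean-up step (the table, column, and date value are placeholders, not the real targets), the first Execute SQL Task in the package might run something like:

    DECLARE @LoadDate DATE = '20160516';    -- the batch date being (re)loaded; in practice passed in by the package

    DELETE FROM dbo.TargetTable
    WHERE  LoadDate = @LoadDate;

With that in place, running the package a second time for the same date simply clears the earlier rows and loads them again.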

Restartability

Packages should be able to pick up from the task prior to the failure point and run properly (where this makes sense).  There’s no need to stage data we’ve already staged.  Typically, when a package fails, it has already done some processing.  Packages should be able to roll back the failed task and restart from that point.  This is an advanced ETL framework activity.  It can be very tricky to deal with restarting a failed package, especially if it has any complexity in it.

My First Foray into Hadoop

So I have a big dataset (1.7 billion rows) that I want to analyze.  I figured, “Hey, Hadoop is all over this Big Data thing, I wonder if I can do a Proof of Concept?”

Compiling Hadoop on Windows (Ugh!)

So, first, I tried to follow some instructions on how to get the Hadoop source onto Windows and compile it.  It turns out that Hadoop is Java based and most Hadoop programmers are Java programmers, so a lot of the instructions are written for Java developers.  And, good for me, the build engine is Maven, which I happen to know quite a bit about thanks to the weeks at CompanionCabinet where I automated the build using Maven.

However, it turned out that Ant was having a problem running the SH command, and after several tries I went googling for an already compiled version of the Hadoop project.  Lo and behold, I found one on GitHub:  https://github.com/karthikj1/Hadoop-2.7.1-Windows-64-binaries.  Near the top of the page is a “1 Release” link.  Click there to download the binary:

[Image: Hadoop 2.7.1 Windows 64 binaries release on GitHub]

Installing all the bits

Based on the wiki article here, http://wiki.apache.org/hadoop/Hadoop2OnWindows, I found the link to this file:  Building.txt

Near the bottom of that file are some incomplete instructions on what to download, install, and do to compile your own version of Hadoop on Windows.

So I downloaded all these:

  1. Java Development Kit (JDK) 1.7.0_80, which includes Java Runtime Environment (JRE) 7 (JDK Download).
  2. Maven 3.3.9.
  3. Cygwin 64.
  4. CMake 3.5.2.
  5. zlib 1.2.8 (the zlib128-dll package).
  6. protobuf 2.5.0.
  7. Windows 7 SDK.

Then I installed or unzipped the files.

  1. JDK 1.7 is an install.  I let it install to Program Files\Java.
  2. I copied the Maven file to the Java folder and unzipped it to a new folder (apache-maven-3.3.9).
  3. I installed Cygwin to the Program Files\Java\Cygwin folder.
  4. I installed CMake and accepted the defaults.
  5. I unzipped the zlib 128 files to Program Files\Java\zlib128-dll.
  6. I unzipped the protobuf files to Program Files\Java\protobuf-2.5.0.
  7. I tried to install the Windows 7 SDK, but it had issues; I ignored them and proceeded, since I wasn’t going to compile my own Hadoop after all.
  8. I unzipped the Hadoop files to \hadoop-2.7.1.

Then I did the following steps:

  1. JAVA_HOME must be set, and the path must not contain spaces. If the full path would contain spaces, then use the Windows short path instead.  In my case, this was:
    set JAVA_HOME=C:\Progra~1\Java\jdk1.7.0_80\
  2. I created a C:\tmp folder because I didn’t have one and, by convention, Hadoop uses it.
  3. I added the ZLIB_HOME environment variable and pointed it to C:\Program Files\Java\zlib128-dll\include.
  4. I added several items to the PATH variable:  C:\Program Files\Java\apache-maven-3.3.9\bin;C:\Program Files (x86)\CMake\bin;C:\Program Files\Java\zlib128-dll;C:\Program Files\Java\Cygwin\bin

With all that in place, I was ready to start Hadoop.

Starting Hadoop

Apparently I have to configure several files in the hadoop-2.7.1\etc\hadoop folder first.

Section 3 on the wiki page describes in detail how to change the configuration files.

I combined that information with the steps found in this article to create the file system, create a directory, and put my txt file there.
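
For reference, the command sequence boils down to something like the sketch below, run from an Administrator command prompt.  HADOOP_PREFIX points at the unzipped hadoop-2.7.1 folder, and the directory and file names are placeholders, not the exact ones I used:

    rem format HDFS, start it, then create a directory and copy a file in
    %HADOOP_PREFIX%\bin\hdfs namenode -format
    %HADOOP_PREFIX%\sbin\start-dfs.cmd
    %HADOOP_PREFIX%\bin\hdfs dfs -mkdir -p /user/data
    %HADOOP_PREFIX%\bin\hdfs dfs -put myfile.txt /user/data/
    %HADOOP_PREFIX%\bin\hdfs dfs -ls /user/data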

What’s Next?

I am not sure what’s next.  Looks like I have some learning to do.

This article gives a nice technical overview of Hadoop.

And then I discovered Hortonworks.  Hortonworks Sandbox is an open-source VM with Hadoop and a bunch of tools already fully configured.  So I downloaded this onto a different machine and am trying it out right now.  I’m going to try the VirtualBox VM.  I used VMWare Player and VirtualBox some time ago and found VirtualBox a lot easier to work with.  It looks like the Hortonworks HDP Sandbox is going to take a while to download.  See you again on Monday.

In the meantime, I’m going to check out this tutorial on edureka.

 

Orks with Dreads? Why not?

A quick and easy way to make dreadlocks for your orks, or for anyone else for that matter, is to use your hobby drill in the thickest part of an old sprue.  Twirling the hobby drill carefully, you can get a ½- to ¾-inch-long dreadlock (or longer, I suppose, if the sprue is thick enough).

Superglue the dreadlocks to your orks’ heads with Superglue Thin.  Do not use plastic glue as it will dissolve the dreadlocks you just worked so hard to make.

And voila, your bald orks now have dreads.

Enjoy your hobby!

Current bought to painted ratio for 2016:  ∞:5