Open Letter to Job Seekers in Information Technology

Dear Job Seeker,

So you’re looking for a new job in I.T.?

First, how many user groups and software groups do you visit each month?  These are a GREAT way to meet potential co-workers and potential employers.  To find them, search for “software,” “users groups,” and “developer” separately.

There are tons of free virtual events that you can participate in.  Find your local vendor events; Microsoft and Oracle, for example, host them regularly.  Speak to local recruiters and find out what events they will be attending.  I’ve had good luck finding jobs through TEKsystems, but there are hundreds of recruiters looking for you! Make sure they can find you.

Second, if you recently graduated, you are probably used to giving presentations.  Even if you didn’t, you should consider speaking at (virtual) events for the user groups or the development community.  Develop a 40-50-minute presentation that is code-heavy instead of PowerPoint-heavy and that shows something interesting.  Join a local Toastmasters club to hone your public speaking skills.

If you’ve done a challenging project or run into a thorny issue that was especially difficult to solve, those make great topics.  If you know Microsoft SQL or Azure, you should consider speaking at SQL Saturday events.  For example, I occasionally speak at SQL Saturday, and one of my sessions is “Why Microsoft Developers Need to Learn Python,” born of my love for using Python to solve utility-type problems.

A lot of firms are “not hiring” right now, but they will hire a candidate who is outstanding.  How do they find out you’re outstanding?  They see you at an event.  Use those events to ask questions, speak up, speak out, network, and meet people.

Third, learn a new skill.  Do you have a particular area of interest yet?  For example, I move data.  I am a SQL expert.  I write C# and Python.  But I also create Alexa skills.  I’m not a web/HTML/CSS/JavaScript full-stack developer.  Knowing this lets me focus on backend technologies.  I’m also a Microsoft guy.  I don’t write Java.  Period.  And I prefer not to work with Oracle or DB2.  I’m (mostly) a SQL Server and Azure developer.  I’m also dipping my toe into the AWS pool.  Careers in I.T. require a commitment to life-long learning.  Some of that time is spent deepening and sharpening your skills in certain areas, but some of it should be spent learning new languages, new techniques, development patterns, unit testing tools, agile methods, and more.

Fourth, consider teaching.  Do you have Python skills?  Can you learn quickly?  Are you quick-witted and “fast on your feet”?  If so, practice teaching.  Put together a 5-minute video of you teaching something, anything technical, and put it on YouTube.  Then add the link to your resume and LinkedIn profile.  Check out Doodly for an alternative way to make training videos.

Fifth, start a technical blog.  As you learn a new skill, you will explore technologies and will probably have to “figure stuff out”.  When you do, blog about it.  I still get hits on one of the technical articles that I wrote years ago about a weird problem with Maven.

Finally, specialize.  Figure out what it is that gives you energy, jazzes you, or otherwise gets you motivated: the aspect of computing that is so awesome for you that time passes and you don’t notice.  Focus on the languages and skills where time flies.  You don’t have to be great at it.  You can learn that.  But you can’t learn passion; you have to find it.  Whatever that thing is that inspires you, that is your strength.  For example, I always aspired to move up the corporate ladder, but that’s not my strength.  I am so much happier solving a complex computing problem.  I eat them up.  And that is my strength.  That and teaching.  I really enjoy sharing my skills with others so they don’t have to figure things out the hard way.

Well, that should be plenty for a start.

Good luck in your job search,
Simon Kingaby

P.S.  LinkedIn.  Make sure your LinkedIn profile is AWESOME.  It is the best way to get the attention of recruiters.  They spend hours googling and searching for people with specific skills.  Make sure they can find you for your specific skills.

When Loading Data, Should I Drop Indexes or Not?

I just ran a few simple tests in Azure SQL DB to see how each would perform.

I have a target table with an identity column that is the clustered primary key, and two other indexes that are the same except for the field order. (Whether having both is useful is a question for another day.) Here’s the DDL for the target table:

	CREATE TABLE dbo.Target (  -- table, key, and constraint names are illustrative
		Id [int] IDENTITY(1,1) NOT NULL,  -- the identity column
		Field1 [numeric](12, 0) NULL,
		Field2 [smallint] NULL,
		Field3 [int] NULL,
		Field4 [smallint] NULL,
		Field5 [nvarchar](1) NULL,
		Field6 [nvarchar](2) NULL,
		Field7 [numeric](4, 0) NULL,
		Field8 [numeric](2, 0) NULL,
		Field9 [nvarchar](2) NULL,
		Field10 [nvarchar](8) NULL,
		Field11 [datetime2](7) NULL,
		Field12 [nvarchar](8) NULL,
		Field13 [datetime2](7) NULL,
		[UpdateType] [nchar](1) NOT NULL,
		[RowCreated] [datetime2](7) NOT NULL,
		CONSTRAINT PK_Target PRIMARY KEY CLUSTERED (Id)
	)


Test 1: Truncate the Target, Drop the Indexes, Insert the Data, Recreate the Indexes.

Test 2: Drop the Indexes, Truncate the Target, Insert the Data, Recreate the Indexes

Test 3: Just Truncate the Target and Insert the Data

Test 4: Truncate the Target, Drop the non-clustered Indexes (leaving the clustered index on the identity column), Insert the Data, Recreate the non-clustered Indexes.
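As a sketch, here is the sequence of T-SQL steps behind these variations, expressed as a small Python helper. The table, index, and staging names (dbo.Target, IX_Target_1, dbo.Staging) are hypothetical placeholders, not the actual objects from my tests; the default arguments correspond to the Test 4 pattern.

```python
# Builds the ordered list of T-SQL steps for a bulk load.
# keep_clustered=True is the Test 4 pattern (leave the clustered PK in place);
# keep_clustered=False is the Test 1 pattern (drop and recreate it too).
def build_load_steps(keep_clustered: bool = True) -> list:
    steps = ["TRUNCATE TABLE dbo.Target;"]
    if not keep_clustered:
        # Test 1 pattern: drop the clustered primary key as well
        steps.append("ALTER TABLE dbo.Target DROP CONSTRAINT PK_Target;")
    steps += [
        # Drop the non-clustered indexes before the big insert
        "DROP INDEX IX_Target_1 ON dbo.Target;",
        "DROP INDEX IX_Target_2 ON dbo.Target;",
        "INSERT INTO dbo.Target SELECT /* columns */ FROM dbo.Staging;",
    ]
    if not keep_clustered:
        steps.append("ALTER TABLE dbo.Target ADD CONSTRAINT PK_Target "
                     "PRIMARY KEY CLUSTERED (Id);")
    steps += [
        # Recreate the non-clustered indexes after the load
        "CREATE INDEX IX_Target_1 ON dbo.Target (Field1, Field2);",
        "CREATE INDEX IX_Target_2 ON dbo.Target (Field2, Field1);",
    ]
    return steps

for step in build_load_steps():
    print(step)
```

The key ordering rule is the same in every variation: every index you drop is dropped before the insert and recreated after it.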

Here are the results. All timings are in milliseconds. These were run on a PRS1 instance of Azure SQL Database.

Step                   Test 1:      Test 2:      Test 3:      Test 4:
                       Trunc then   Drop before  No Drop/     Trunc, don't
                       Drop Idxs    Trunc        Recreate     drop clustered
Truncate                        4            2            0             4
Drop PK                         8            4          n/a           n/a
Drop Index 1                    5       23,630          n/a             2
Drop Index 2                    6            2          n/a             2
Insert 1.84M rows          83,033       82,315      161,706        83,205
Create PK                  20,454       21,205          n/a           n/a
Create Index 1             12,149       12,264          n/a        12,265
Create Index 2             11,142       11,313          n/a        11,247
Total Time (ms)           126,801      150,735      161,706       106,725
Total Time (mins)            2.11         2.51         2.70          1.78
Delta vs Test 1 (ms)            0      +23,934      +34,905       -20,076

Test 4 was the clear winner, as it avoided the cost of recreating the clustered index.  That makes sense: the clustered index was filled in order by the identity column as rows were added.  Test 1 came in second, so if your clustered index is not on an identity column, or you have no clustered index, you are still better off dropping and recreating the indexes before a large insert.

Conclusion: When inserting larger data sets into an empty table, drop the indexes before inserting the data, unless the index is clustered on an identity column.

A New Adventure…

I’ve added a few new acronyms to my life this past week:  BMI, BNA, and TN.

I’ve accepted a position at Broadcast Music Inc. (BMI), which is in Nashville (BNA), Tennessee (TN).  This will require a move from Charlotte to Nashville, something we are going to have to work on over the next few weeks.

I will be a Senior ETL Developer at BMI and I am very excited about this new position.  They have a ton of ETL packages to move to SSIS and the Microsoft stack.  It will almost certainly involve BIML because of the sheer number of packages and data feeds.  It sounds awesome.

So this One Man is off to mow a new meadow once again.  Say a prayer for me and wish me luck.  Thanks!

The 3 Things a Project Needs to Accomplish – Highest Level Requirements Document for an Agile Project

Often, in an agile software development project, there can be a “fly by the seat of your pants” feel to the development effort as stories are created, prioritized, and scheduled into Sprints without a formal Requirements Phase or a 10-pound Requirements Book – especially if you are working in an almost-Agile or mostly-Agilish environment.  One way to handle the lack of a Requirements Book is to create a “3 Things My Agile Project Needs to Accomplish” document.

The “3 Things My Agile Project Needs to Accomplish” document is lightweight and should typically be a single page.  It simply lists the 3-5 things your project needs to accomplish, with 3-5 bulleted items below each one, breaking the “Thing” down into high-level, one-sentence descriptions of functionality that will accomplish it.

Here’s an example template:

My Agile Project

Project Purpose in 1-2 sentences.

1.    First (Most Important) Thing the Project Needs to Accomplish

  • High level functionality that will accomplish the first thing
  • More High level Functionality
  • More High level Functionality
  • Perhaps More High level Functionality
  • Perhaps More High level Functionality (Max 5 items. If you need more, then you probably need another thing to accomplish or you aren’t thinking big enough.)

2.    Second Thing the Project Needs to Accomplish

  • High level functionality that will accomplish the second thing
  • More High level Functionality
  • More High level Functionality

3.    Third (Least Important) Thing the Project Needs to Accomplish

  • High level functionality that will accomplish the third thing
  • More High level Functionality
  • More High level Functionality



You may need up to 5 things the project needs to accomplish.  Any more than that and either you aren’t thinking big enough, or the project has a very broad scope and should likely be broken down into smaller projects.

The bulleted items represent business objectives.  At this level, you should definitely be thinking in terms of “What does the solution need to do to accomplish one of the 3 things?”  “What?” implies using business objectives and business nomenclature.  We should not be concerned with “How the solution needs to work or how the solution needs to be built.”  Also, the bulleted items are not User Stories, or even Features, in and of themselves.  They are bullets of business functionality that describe “What?” is needed.

On index cards or in an ALM tool like MS TFS, each bulleted item can be translated into one or more (usually not more than 3-5) Features.  (If you need more than 5 features, you should probably go back to the 3 things and add another bullet, or even add an additional thing.)  These features should be written using business nomenclature and are still “What needs to be done?”, rather than “How?”

Each Feature can then be elaborated into User Stories and Test Cases.  This usually happens in a meeting with the developers, testers, SMEs, and the project/team leader (e.g., ScrumMaster or PM).

Traditionally, the User Story should be in the format:

Casual:  As a <type of user>, I want <some goal> so that <some reason>.

More Formal:  As a <concerned party>, I want <goal, or business or technical feature or function (remember What, not How)>, so that <reason / business purpose / justification>.

Using notecards, the User Story is written on the front of the card, and 3-5 Test Cases that will prove that the feature or function works are on the back.  The combination of the User Story and Test Cases should be sufficient to qualify as “Requirements”.  It should be possible for the team to elaborate the Tasks needed and estimate the Story Points (for the Stories) and Hours (for the Tasks).

For example, here is a User Story:

As an Analyst, I want a Widget Month Pricing table that includes one record for each Widget for each month that that Widget was on sale, including a blank Market Price field, so that I can store the Market Price generated by the Pricing Algorithm for each Widget/Month combination.

And here are some Test Cases:

  • Does the table show only relevant records based on Month and Widget?
  • Can the Pricing Algorithm, and only the Pricing Algorithm, write to the Market Price field?
  • Does the table show the right Widget/Month records for all Widgets?
  • Does the hourly refresh of the table run in under 90 seconds?

The User Story and Test Cases above give me the info I need to be able to estimate the story (in Story Points), task it out and start working on it.

A note about Test Cases and Non-functional Requirements (NFRs): NFRs, like security requirements and performance requirements, are not well suited to User Stories, but they work very well as Test Cases.  As shown above, the User Story made no mention of the hourly refresh; that requirement came out in the discussion of the story and was captured as a Test Case, along with the performance requirement that the refresh take no more than 90 seconds.  Both the assumption that there would be an hourly refresh and the performance requirement are NFRs that should be captured while the Story is being elaborated.

Visual Studio 2012, 2013, 2015, 2017 Version Numbers and Updates

Which Version of Visual Studio do I Have?

Because I couldn’t find it on the web, here’s a list of the Visual Studio version numbers for each of the Updates:

VS 2012 Version    Version ID
Shell – RTM        11.0.50727.1
Update 5           11.0.61219.00

VS 2013 Version    Version ID
RTM                12.0.21005.1
Update 1           12.0.30110.0
Update 2           12.0.30324.0
Update 3           12.0.30723.0
Update 4           12.0.31101.0
Update 5           12.0.40629.0

VS 2015 Version    Version ID
RTM                14.0.23107.0
Update 1           14.0.24720.00
Update 2           14.0.25123.00
Update 3           14.0.25420.10

For VS2017 versions, check out this link:  VS 2017 Release Notes

For VS2019 versions, check out this link:  VS 2019 Release Notes

New BI SharePoint Server Showed Up

I love that no one told me the new BI Server for SharePoint was up.  Grrr…..

And, lo and behold, despite the requisition stating clearly that it needed Windows 2012 on it, I still got Windows 2008.  Aahhh!

I fixed that with an in-place upgrade to Windows 2012 R2.   Hopefully, that won’t bugger it up too badly.

Now I get to install SharePoint and all the BI services.  Fun, fun, fun.

But wait…  First I have to install Windows 2012 Update.  Which turns out to be a handful of KB files.  Before you can install the Update, you have to install KB2919442.  Then you can install the Update.  After several reboots, the update is finally installed.

Next I ran SharePoint setup, just to see what would happen.   This happened:

SharePoint Pre-requisites

So I guess I know what I’ll be doing next.

Purpose and Principles of the Data Layer

At its core, the Data Layer’s main purpose is:

To abstract all interactions with the database so that business objects can be written to deal with business rules, not with database interaction.

For example, when promoting a standard Deal, the business logic comprises:

  1. validate that the Deal is in a promotable state
  2. promote the Deal
  3. generate the related Deal Transactions
  4. generate the related Confirmation

In addition to these items, there are interactions with the Deal Pricing, Deal Costing, Deal Transaction Pricing and a variety of other lookup tables. Records need to be saved, retrieved, updated and deleted to facilitate the persistence of the Deal’s new “Promoted” state. The Data Layer will encapsulate the code that interacts with the database, so the Deal Promoter class only needs to be concerned with the business rules of promoting a deal, and not with the mechanics of persisting those changes to the database.

ORM Principles

In order to effectively serve as an Object-Relational Mapping (ORM) tool, the Data Layer needs to implement the following principles.

Principle 1: The Data Layer should generate the code necessary to deal with different tables and should provide a common API for working with the resulting data objects.

A typical database interaction involves the following steps:

  1. Get the connection string
  2. Open the connection
  3. Create the command object
  4. Execute the command
  5. Close the connection

The only thing that changes from table to table is the names of the table and its columns. All of the database code is identical.
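A minimal sketch of this idea: one generic class owns the connect/command/execute boilerplate, parameterized only by table and column names. The actual Data Layer here is .Net over Oracle; this uses Python’s built-in sqlite3 purely as a stand-in, and the `Table` class and `Deal` table are illustrative.

```python
import sqlite3

class Table:
    """Generic data object: only the table and column names vary per table."""
    def __init__(self, conn, name, columns):
        self.conn, self.name, self.columns = conn, name, columns

    def insert(self, row):
        cols = ", ".join(self.columns)
        marks = ", ".join("?" for _ in self.columns)
        self.conn.execute(f"INSERT INTO {self.name} ({cols}) VALUES ({marks})",
                          [row[c] for c in self.columns])

    def all(self):
        cur = self.conn.execute(f"SELECT {', '.join(self.columns)} FROM {self.name}")
        return [dict(zip(self.columns, r)) for r in cur]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Deal (Id INTEGER, Name TEXT)")

deals = Table(conn, "Deal", ["Id", "Name"])   # same API for any table
deals.insert({"Id": 1, "Name": "Q1 Gas Swap"})
print(deals.all())  # [{'Id': 1, 'Name': 'Q1 Gas Swap'}]
```

A real Data Layer would generate a typed class per table rather than passing name lists around, but the division of labor is the same: the database mechanics live in one place.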

Principle 2: The Data Layer should be able to work with the entire Object Graph by saving and retrieving related objects as a set.

Some objects are more complex than others, for example, a Deal has Pricing, Charges and Transactions that are an integral part of it. It also has Confirmations that are related to it. When saving a Deal, the pieces and parts of the Deal should get saved too.

Principle 3: The Data Layer should handle failures within the context of a transaction and roll back the changes to a consistent, stable state.

Sometimes, when saving a complex object, an error may occur in one of the pieces. In this case, the Data Layer should gracefully handle the error and leave the object and the database in a stable, consistent state.
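Principles 2 and 3 can be sketched together: save a parent and its children as one unit, and if any piece fails, roll everything back. Again sqlite3 is a stand-in, and the Deal/DealCharge tables are made-up examples.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Deal (Id INTEGER PRIMARY KEY, Name TEXT)")
conn.execute("CREATE TABLE DealCharge (DealId INTEGER, Amount REAL NOT NULL)")

def save_deal(deal, charges):
    """Save the Deal and all of its Charges atomically."""
    try:
        with conn:  # one transaction: commits on success, rolls back on error
            conn.execute("INSERT INTO Deal VALUES (?, ?)", deal)
            conn.executemany("INSERT INTO DealCharge VALUES (?, ?)", charges)
    except sqlite3.Error:
        pass  # the database is left exactly as it was before the save

save_deal((1, "Good Deal"), [(1, 10.0), (1, 20.0)])
save_deal((2, "Bad Deal"), [(2, None)])  # NOT NULL violation: whole save rolls back

print(conn.execute("SELECT COUNT(*) FROM Deal").fetchone()[0])  # 1
```

The second save fails on a child row, and the parent row inserted moments earlier disappears with it; the caller never has to reason about a half-saved Deal.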

Principle 4: The Data Layer should intelligently map database tables to appropriate Business Objects.

In several cases, a business object will represent a concept differently than the database might. For example, the database table EMPLOYEE contains all the records for the Employees, Managers and Direct Reports business classes.

Principle 5: The Data Layer should handle concurrency properly.

When objects are saved to the database, concurrency problems arise because the data being saved is about to overwrite data that has already changed since it was last retrieved. Concurrency resolutions include: Overwrite, Merge and Discard. The Data Layer needs to support these options and allow developers to choose which resolution to employ.
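One common way to implement this is optimistic concurrency with a version column: the UPDATE succeeds only if the row still carries the version the client originally read. This is a sketch of the detection mechanism (sqlite3 as a stand-in, names illustrative), showing the Overwrite resolution; Merge and Discard would branch the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Deal (Id INTEGER PRIMARY KEY, Name TEXT, Version INTEGER)")
conn.execute("INSERT INTO Deal VALUES (1, 'Original', 1)")

def save(deal_id, name, read_version, resolution="fail"):
    # Only update the row if no one else changed it since we read it
    cur = conn.execute(
        "UPDATE Deal SET Name = ?, Version = Version + 1 "
        "WHERE Id = ? AND Version = ?", (name, deal_id, read_version))
    if cur.rowcount == 0 and resolution == "overwrite":
        # Conflict detected; the chosen resolution is to overwrite anyway
        conn.execute("UPDATE Deal SET Name = ?, Version = Version + 1 WHERE Id = ?",
                     (name, deal_id))
        return True
    return cur.rowcount == 1

save(1, "User A's change", read_version=1)          # succeeds; version is now 2
print(save(1, "User B's change", read_version=1))   # False: stale read detected
```

User B read version 1, but the row is now at version 2, so the guarded UPDATE matches nothing and the conflict surfaces instead of silently clobbering User A’s work.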

Query Support

Principle 6: The Data Layer should support LINQ.

In order to provide data sources for drop-downs and grids, the Data Layer needs to be able to support querying, including sorting, grouping and summarizing. There are three choices to do this:

1) Oracle native SQL queries

Implementing this technique often pushes Business and UI logic all the way back into the database. It also makes ORM a challenge, as the query results are not table-based: they are not updateable and do not usually have the necessary key fields.

2) Custom querying support in the Data Layer

Implementing this technique is complex, non-standard and may have performance issues.

3) LINQ (The .Net framework’s built in query language)

Implementing LINQ provides powerful, sophisticated query capabilities. LINQ to Entities also improves performance by taking advantage of the Entity Framework’s knowledge of the database objects.
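As a rough analogy (in Python rather than C#, and not the actual DevForce API), the filter/group/summarize style that LINQ gives you over entity objects looks like this:

```python
from itertools import groupby

deals = [
    {"Trader": "Ann", "Amount": 100},
    {"Trader": "Bob", "Amount": 250},
    {"Trader": "Ann", "Amount": 50},
]

# LINQ-ish: deals.Where(d => d.Amount > 40).GroupBy(d => d.Trader)
#                .Select(g => new { g.Key, Total = g.Sum(d => d.Amount) })
big = sorted((d for d in deals if d["Amount"] > 40), key=lambda d: d["Trader"])
totals = {trader: sum(d["Amount"] for d in group)
          for trader, group in groupby(big, key=lambda d: d["Trader"])}
print(totals)  # {'Ann': 150, 'Bob': 250}
```

The point is that the query lives in the application language, against objects, rather than in database-specific SQL strings.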

Principle 7: The Data Layer should support asynchronous communication.

One of the worst aspects of application performance is the perceived lag while waiting for data to be retrieved from a database, transferred over the network and rendered in the UI. Asynchronous communication is the recommended way to prevent this lag by allowing the UI to be responsive while the data is retrieved, transferred, and even rendered, asynchronously. In Silverlight, all network communication is asynchronous, so the Data Layer must support asynchronous communication.
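The shape of asynchronous retrieval can be sketched in Python’s asyncio (a stand-in for DevForce’s async query callbacks in Silverlight; the delays and names are made up): the “UI” keeps ticking while the data loads.

```python
import asyncio

async def fetch_deals():
    await asyncio.sleep(0.05)  # stand-in for database + network time
    return ["Deal 1", "Deal 2"]

async def main():
    ticks = 0
    task = asyncio.create_task(fetch_deals())  # kick off the query
    while not task.done():                     # the UI stays responsive meanwhile
        ticks += 1
        await asyncio.sleep(0.01)              # e.g. repaint, handle input
    return await task, ticks

deals, ticks = asyncio.run(main())
print(deals)  # ['Deal 1', 'Deal 2']
```

With a blocking call, `ticks` would be zero: nothing else could happen until the data arrived. That frozen interval is exactly the perceived lag described above.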

Oracle Support

Principle 8: The Data Layer should support Oracle specific features, such as Sequences, Packages and Oracle Data Types.

Most Oracle tables have a numeric key field tied to a Sequence. Much of the legacy code is embedded in the database in Oracle Packages. There are also some Oracle-specific data types (particularly LOBs) that need to be translated to/from their .Net equivalents. The Data Layer needs to handle all three of these situations properly.

Troubleshooting Support

Principle 9: The Data Layer should support granular logging for debugging and troubleshooting.

When debugging and troubleshooting, a detailed log of what is happening can be a very useful tool. Especially in asynchronous or Inversion of Control situations where the code cannot be easily stepped through, a log is critical to the discovery and elimination of bugs.
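One simple way to get that granularity is to wrap every Data Layer call so that its name, arguments, result, and any failure are logged automatically. A sketch (Python’s logging module standing in for whatever the real framework uses; `get_deal` is a made-up example):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("DataLayer")

def logged(fn):
    """Log entry, result, and any exception for a Data Layer call."""
    def wrapper(*args, **kwargs):
        log.debug("calling %s args=%r", fn.__name__, args)
        try:
            result = fn(*args, **kwargs)
            log.debug("%s returned %r", fn.__name__, result)
            return result
        except Exception:
            log.exception("%s failed", fn.__name__)
            raise
    return wrapper

@logged
def get_deal(deal_id):
    return {"Id": deal_id}  # stand-in for a real database fetch

get_deal(42)
```

Because the logging lives in the wrapper, every call is captured uniformly, which is exactly what you want when you can’t step through asynchronous or IoC-driven code.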

Performance Enhancement

Principle 10: The Data Layer should support server-side and client-side caching to improve performance.

Data Caching on the Server-side allows redundant calls for data from different clients to be served in a single database request. Data Caching on the Client-side allows redundant calls for data on the client to be served on the client without any network traffic at all.

Principle 11: The Data Layer should support validation at the client and at the server to improve performance.

Eliminating round trips by providing client-side validation will improve performance. Providing server-side validation will ensure data integrity at the server.

IdeaBlade DevForce – Model Setup Walk-through – Background

I know the world is about to change in April when VS2010 goes live, however, we are still using VS2008 and will be for at least a few more weeks.  In the hopes that this walk-through provides some insight to someone, even if it is just me, here goes.


We are  creating a Silverlight application for several reasons:

  1. Web development is not nearly complex enough, so we sought out a unique challenge involving the newest, least documented, voted most-likely-to-be-completely-re-done-in-the-next-version technology (excepting WWF, I mean, WF, which will probably hold that distinction for years to come).
  2. Our users do everything in Excel, meaning that much of the data entry UI for this app needs to be as Excel-ish as possible, including the dreaded Multi-Add (adding multiple rows between saves) and Multi-Edit (editing multiple rows between saves).  [FYI: It turns out that none(?) of the OTC Grids are actually designed for this.  You can make it work with Telerik if you want to try hard.  See postings like this one in Telerik’s forums for more info.  They have come a long way since that posting.]
  3. Management is opposed to fat client or virtualization, so the app has to be web-based.
  4. After much trial-and-error, and trial-and-failure, and trial-and-compromise, we deployed an ASP/Ajax solution that was less than satisfactory and not-at-all Excel-ish, so we had to find a “better way”.
  5. We have a code base that comprises 8 years of work in VB 6, classic ASP, ASP.Net 1, ASP.Net 2, ASP.Net 3.5, VB.Net (all sorts), Infragistics (an old version), Telerik (several versions), copious quantities of javascript, several generations of CSS and DHTML, including many of the really exotic first generation tricks, Ajaxified ASP pages, ASP/Ajax pages and an Oracle back-end, so we knew that ASP was not going to give us what we needed.
  6. None of us had ANY desire to explore Flex.
  7. We are a Microsoft shop, so building a Java app was not really an option.
  8. Silverlight 3 was coming and promised enough features for us to begin writing an Enterprise app.  (It turns out that we jumped the gun on this, but we were not alone and SL4 promises to remedy many of the “it’s-not-ready-for-the-enterprise” issues.)

We are using DevForce and the Entity Framework because:

  1. Having worked with Hibernate, NHibernate, and SubSonic (which I preferred over NHibernate), I was convinced that a Data Layer / ORM would make our application specific code much easier to write and would provide more and better infrastructure/plumbing than we ever could.
  2. Having been told “No Open Source” and given 3 weeks to pull SubSonic out of a working project and replace it with a roll-my-own ORM, I knew that many features we needed (concurrency and caching, to name just two) were going to be a HUGE effort to write myself, and that if this was a Make-or-Buy decision, Buy was clearly the better choice.  (Check out Davy Brion’s Build Your Own Data Access Layer series for a deeper examination of the Make option.)
  3. Having been exposed to DevForce Classic a few years ago, I knew their product provided much of what we needed, and they were on the cusp of releasing a Silverlight version of their WPF framework.  As an added bonus, their documentation, and Ward Bell’s blog, are highly readable and provide excellent project guidance and design philosophy.
  4. DevForce sits on top of Entity Framework, which does not support Oracle; just as this was about to kill the deal, we found that DevArt’s Oracle drivers had finally begun supporting EF properly.

We are using Prism and Unity because:

  1. If you’re gonna do this thing, you might as well go all in.
  2. Having worked with CAB for a Windows App, I understood the potential of a component based application framework — or at least imagined I understood it, can anyone really understand anything PnP publishes?
  3. After dabbling in the Java world for a few months, I had developed an appreciation for Spring, Dependency Injection, configuration over coding and convention over configuration.
  4. I am unrepentant about preferring Agile development practices, and many of the “best practices” prescribed by PnP and implemented in Prism, represent many Agile coding principles in action.  Both VersionOne and RallyDev have Agile 101 documentation on-line.   Mike Cohn has written several excellent books introducing teams to Agile practices.

So we ended up with the following technology stack:

  1. Prism 2 for its Modular framework
  2. Unity for Dependency Injection
  3. Microsoft Silverlight 3 for the UI
  4. Telerik for Silverlight for several of its UI components (alas, we still have to do far too much ourselves in this area though)
  5. IdeaBlade DevForce for Silverlight for its Silverlight friendly Business Entity model and asynchronous client-server communication layer
  6. DevArt dotConnect for Oracle for its support of the MS Entity Framework and its excellent Entity Developer tool
  7. Microsoft Entity Framework for the server-side Entity Model and database connectivity

Seasons Change… Greener Pastures?

After spending several months figuring out how to automate the build of a Java app using Maven, CruiseControl and myriad other tools, I took a job where I can return to working with Microsoft technology. However, little did I know, the greener grass wasn’t mowed and disguised the morass that is web development anyway. So, instead of learning Java, I have had to learn ASP.Net, Javascript and AJAX, Telerik RadControls for Ajax and more. It turns out that with ViewState, Session State and a stateless environment, everything gets confused. Add in server side, client side, Postbacks, and AsyncCallBacks, and it’s a royal mess.

Having recently had my eyes opened to Open Source, the first area I pursued was the Data Layer. I started with NHibernate, but it turned out to be a bad fit. SubSonic was MUCH better, but it didn’t support Oracle very well. I did start making changes to the SubSonic source to get it to work for us, but then the corporate gods heard “Open Source” and decreed that SubSonic had to go. Never mind that we had spent five months using it. I was given two weeks to come up with a new, homegrown data layer. Well, two months later, we had our data layer. It reads the Oracle tables, views, functions, procs and packages and generates classes that can read/write/execute as appropriate. It turned out pretty well and is delightfully easy to use.

Next up, the UI. The chosen toolset is Telerik RadControls for ASP.Net AJAX. (My previous experience is with the Infragistics Windows toolset, so this was an interesting transition.) Users wanted the grid to be multi-edit. Telerik’s grid does not support this natively, but it can be configured and coded to do it. But that is for another post.

Now, we are investigating MVC and Silverlight. I am hoping that my posts here will be of benefit to some as I explore these new technologies.