Visual Studio 2012, 2013, 2015, 2017 Version Numbers and Updates

Which Version of Visual Studio do I Have?

Because I couldn’t find it on the web, here’s a list of the Visual Studio version numbers for each of the updates to VS 2012, 2013, and 2015:

VS 2012 (Version ID)
Shell – RTM: 11.0.50727.1
Update 5: 11.0.61219.00

VS 2013 (Version ID)
RTM: 12.0.21005.1
Update 1: 12.0.30110.0
Update 2: 12.0.30324.0
Update 3: 12.0.30723.0
Update 4: 12.0.31101.0
Update 5: 12.0.40629.0

VS 2015 (Version ID)
RTM: 14.0.23107.0
Update 1: 14.0.24720.00
Update 2: 14.0.25123.00
Update 3: 14.0.25420.10

For VS2017 versions, check out this link:  VS 2017 Release Notes
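If you’d rather not dig through Help | About on each machine, the version number is also stamped on devenv.exe itself. Here is a rough sketch of reading it (it assumes pywin32 is installed and uses the default VS 2013 install path; adjust the path for your version and edition):

    # Sketch: read the file version of devenv.exe and compare it to the tables above.
    # Assumes pywin32 (pip install pywin32); the path is the default for VS 2013.
    import win32api

    devenv = r"C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\devenv.exe"

    info = win32api.GetFileVersionInfo(devenv, "\\")
    ms, ls = info["FileVersionMS"], info["FileVersionLS"]
    print(f"{ms >> 16}.{ms & 0xFFFF}.{ls >> 16}.{ls & 0xFFFF}")  # e.g. 12.0.40629.0 for Update 5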

An Introduction to Estimating with Story Points

In the agile process, when we speak of estimating, we refer to Points (often called Story Points).  This article explains Points and how we use them in the estimating process.

First, let’s take a peek into a typical Project Manager (PM) – Developer conversation (maybe you’ve had a conversation like this yourself):

PM:  So, how long do you think it will take to finish this project?

Developer:  Oh, I don’t know, 2-3 weeks, maybe 4.

PM:  Well, how about this User Story; how long will that take?

Developer:  I’m not sure, but I would guess I can get that done in a couple of days, maybe three, or four.

Some things to note about this conversation: In the first estimate, the Developer gives a range of 100% (2 weeks) to 150% (3 weeks) and then extends it to 200% (4 weeks). What do you think the PM heard? Let’s be kind; maybe he heard six weeks, which extends it to 300%. A similar thing happens with the second estimate, only smaller in scope: the range runs from 100% (2 days) to 150% (3 days) to 200% (4 days), and reaches 250% when the PM adds a one-day buffer to that estimate as well.

Clearly, there is something wrong with the estimates in this scenario. Note that this is not necessarily a reflection of the estimator’s ability to estimate well, but rather of the situation.

What’s wrong is that no one trusts the estimates because they are invariably wrong.

What if your estimates could be reliable and useful?  What factors would we consider?

First, there is the size of the problem in a perfect world; this is typically where the “2-3 weeks” estimate comes from. Second, there’s the “unless something goes wrong”, which is what caused the developer to say, “…maybe 4.” What goes into the “unless something goes wrong” check? It is a consideration of risk, complexity, and prior knowledge. Let me explain: if the User Story under consideration has a high degree of risk, or is quite complex to implement, or if the developer doesn’t yet know how to use the technology that this User Story asks for, then the estimate goes up and becomes more approximate. Finally, there’s elapsed time: a simple table change may take the developer 20 minutes to design and document, but then it takes a 4-8 hour turnaround for the DBA to make that change in the Development database, so clearly this will increase the estimate too. The same is true for testing: sometimes a simple enough fix causes all sorts of retesting.

So, instead of measuring estimates in hours, we need a measure that considers size, risk, complexity, prior knowledge, and other factors such as elapsed time and testing impact. For lack of anything better to call it, let’s pick an arbitrary measure for this estimate and call it a Point.

Now, we can look at one of the stories we need to do and say, arbitrarily, “This User Story is worth 1 point.” Then we can estimate the other stories relative to that first User Story. Perhaps the second User Story is not as well defined and will require some developer research; it is also more complicated, so let’s say that User Story is worth 5 points.

If we do this with just any numbers, then we run into a special kind of madness where we are splitting hairs over the difference between a 12 point apple and a 13 point orange.  Not only are the problems dissimilar, but since the 1 point item was arbitrarily chosen, the difference between a 12 and a 13 is quite nebulous.  For this reason, most agile teams use only specific numbers when estimating points.  These are typically:  0, 1, 2, 3, 5, 8, 13, and 20.  (This sequence is based on the Fibonacci sequence.)  Note that these are also integers, so there is nothing like 0.5 Points; the User Story is either 0 Points or 1 Point in size.

A lot of people don’t like the completely arbitrary selection of the value of a Point. If, instead, we say that a 1 Point User Story is roughly what one developer can accomplish in a standard day with no risk and easy complexity, then we have a basis for comparing the other numbers: a 2 is either twice as big, twice as hard, or twice as risky. A 3 is probably some combination of these that makes it roughly 3 times bigger than a 1, and so on.

Further, a 0 is any User Story that’s a “gimme”. If the developer says, “Oh, that’s no problem, I can have that fixed in 5 minutes,” that’s a 0 Point User Story. Note that ten 0 Point stories do not add up to 0 Points, as ten little stories can easily take a day to accomplish.

Finally, there is 20. Essentially, any User Story that is big enough to make you want to give it a 20 is too big to be estimated well, so a 20 is more of a red flag than a legitimate points value; still, having it allows you to estimate all of your stories, regardless of their size.

(Some teams include the integers 50 and 100 in their estimating. This makes 20 just a big story, 50 a story big enough to fill a sprint, and 100 the red flag that the story is too big to estimate. I prefer to max out at 20 points.)
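To make the scale concrete, here is a toy sketch of how size, risk, and complexity might push a story up the allowed values. Real teams estimate by comparing stories to each other, not by running a formula, so the factors below are made up purely for illustration:

    # Toy illustration only: combine rough size/risk/complexity factors and snap the
    # result up to the next allowed Point value. Real estimation is done by comparison.
    ALLOWED_POINTS = [0, 1, 2, 3, 5, 8, 13, 20]

    def estimate_points(ideal_days, risk=1.0, complexity=1.0, new_tech=1.0):
        """ideal_days is the perfect-world size; the multipliers grow with uncertainty."""
        raw = ideal_days * risk * complexity * new_tech
        for points in ALLOWED_POINTS:
            if raw <= points:
                return points
        return 20  # anything bigger is a red flag: split the story instead

    print(estimate_points(1))                              # 1 - the baseline story
    print(estimate_points(2, risk=1.5))                    # 3 - bigger and riskier
    print(estimate_points(3, complexity=2, new_tech=1.5))  # 13 - time to worry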

In Summary:

Points are an estimate of size, risk, complexity and other factors such as prior knowledge, elapsed time, and testing considerations.

Points are typically an integer in the sequence:  0, 1, 2, 3, 5, 8, 13, and 20.

Points allow us to estimate the relative size of each story.

Enabling the Unknown Member on a Dimension in SSAS

Do you ever get this error:

  • Warning 5 Errors in the OLAP storage engine: The attribute key cannot be found when processing: Table: ‘’, Column: ‘’, Value: ‘12:00:00 AM’. The attribute is ‘Date’.

It turns out that this error occurs because:

  • Null values in the fact table relationship are converted to ’12:00 A.M.’

So I tried enabling the Unknown Member attribute of the Date table.  (It’s a standard Date table generated by the SSAS wizard for such things.)

No luck.  Still getting the error when the date in the Fact table is null.
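Before blaming the cube, it’s worth confirming that the Fact table really does contain NULL dates. A quick check along these lines will do it (the connection string, table, and column names are placeholders; substitute your own):

    # Sanity check: count fact rows whose date key is NULL.
    # Server, database, table, and column names are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "Driver={SQL Server};Server=MyServer;Database=MyDW;Trusted_Connection=yes;"
    )
    count = conn.cursor().execute(
        "SELECT COUNT(*) FROM dbo.MyFactTable WHERE DateKey IS NULL"
    ).fetchone()[0]
    print(f"{count} fact rows have a NULL date key")

Rows like those are what trip the attribute key error during processing.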

It turns out that enabling the Unknown Member is a multi-step process.  (Many thanks to wildh for their post here.)

To enable the Unknown Member on a Dimension follow these steps:

  1. Go to the Properties page of the Date Dimension you are using, set UnknownMember to Visible, AND set the UnknownMemberName to something, such as ‘Unknown’:
    Enable the Unknown Member in the Dimension
  2. Now, go to the Cube Definition and click on the Dimension Usage tab:

    Dimension Usage Tab with Date dimension Usage highlighted.
  3. Next, click on the button next to the Date dimension usage, and then click the advanced button in the bottom right:

    Select the Advanced button
  4. Finally, change the relationship Null Processing drop down to Unknown Member:

    Select the UnknownMember option in the Null Processing dropdown
  5. Click OK to get back to the Dimension Usage tab, and change the Null Processing dropdown for any other Date relationship where the date in the Fact table could be null.

Removing Windows.old folder in Windows 2012 R2

It turns out that all the tips for Windows 7 and 8 rely on assumptions that are not valid in Windows Server 2012.

See here for some good Win 8 tips:

Removing Windows.old from Windows 8

For Windows Server 2012, I eventually figured out this kludge to get rid of the 16GB Windows.old folder (a scripted equivalent follows the steps):

  1. Take a backup of the OS Drive on your Server (Usually C:).
  2. As an Administrator on the local machine, or as a Domain Administrator, right-click on the Windows.old folder and choose Properties.
  3. Choose the Security tab.
  4. Choose the Advanced… button
  5. Beside Owner at the top of the dialog, click Change.
  6. Enter your own User Id.  Check Names.  Ok.
  7. Now check the box that appeared for Replace owner on subcontainers and objects.
  8. Click Apply.
  9. When it’s done, Cancel out of the Advanced Security Settings and Windows.old Properties dialogs.
  10. Then, reopen the Properties and select Security and select the Advanced… button.
  11. Now hit the Change Permissions button.
  12. Now click the Add button.
  13. At the top, click Select a principal and put your own User Id in.  Check Names.  Ok.
  14. Under Basic Permissions, click Full Control.  Click Ok.
  15. Back on the Advanced Security Settings dialog, in the lower left check the Replace all child object permissions with inheritable permission entries from this object.
  16. Click Apply.  Click Yes when it asks you to confirm.
  17. Click Ok when it’s done.  Close the Properties dialog.
  18. Now, close any other windows you happen to have opened in your frustrated attempts to get rid of Windows.old (I had many).
  19. In Explorer, right-click the Windows.old folder.  Hold down Shift and select Delete.  (Shift will skip the Recycle Bin and just delete the folder and its contents.)  Some time later, the Windows.old folder “should” be gone.  I did this on two servers; on one it disappeared, and on the other, two files were in use somehow.  Even after a reboot, those two files are still in use.  Weird.  They only take up 8K, so I’m going to ignore them.  One day, I might go after them again, but I doubt it.
  20. It’s probably a good idea to force a reboot after deleting the Windows.old folder to make sure you didn’t just hose your server.  If you did, then you have a backup to restore from, right?
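For what it’s worth, the same take-ownership / grant-permissions / delete dance can also be scripted with the built-in takeown and icacls tools. Here is a rough equivalent of the steps above (run it from an elevated prompt, only after that backup, and note that it grants ownership to the Administrators group rather than to your own User Id):

    # Rough scripted equivalent of the GUI steps above. Run elevated, after a backup.
    # Grants the Administrators group ownership and full control, then deletes the folder.
    import subprocess

    target = r"C:\Windows.old"

    # Steps 2-9: take ownership of the folder and everything under it.
    subprocess.run(["takeown", "/F", target, "/A", "/R", "/D", "Y"], check=True)

    # Steps 10-17: grant Administrators full control, recursively, continuing on errors.
    subprocess.run(["icacls", target, "/grant", "Administrators:F", "/T", "/C"], check=True)

    # Step 19: delete the folder outright (no Recycle Bin, like Shift+Delete).
    # No check=True here: a few in-use files may survive, just as they did in the GUI.
    subprocess.run(["cmd", "/c", "rd", "/s", "/q", target])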

New BI SharePoint Server Showed Up

I love that no one told me the new BI Server for SharePoint was up.  Grrr…..

And, lo and behold, despite the requisition stating clearly that it needed Windows 2012 on it, I still got Windows 2008.  Aahhh!

I fixed that with an in-place upgrade to Windows 2012 R2.   Hopefully, that won’t bugger it up too badly.

Now I get to install SharePoint and all the BI services.  Fun, fun, fun.

But wait…  First I have to install the Windows Server 2012 R2 Update, which turns out to be a handful of KB files.  Before you can install the Update, you have to install KB2919442.  Then you can install the Update.  After several reboots, the update is finally installed.

Next I ran SharePoint setup, just to see what would happen.   This happened:

SharePoint Pre-requisites

So I guess I know what I’ll be doing next.

SQL Server 2014 Installation

Last week I upgraded our Development box to SQL Server 2014.  Since this was a clean Dev box I had the choice of upgrading from SQL Server 2012 or doing a remove-install.  I figured that rather than leave all the detritus of the prior installs and versions of Data Tools, I would do a remove-install.

Well, uninstalling SQL Server 2012 turned out to be more than just clicking Uninstall in the Control Panel | Programs and Features section.  After trying as many Uninstalls as I could, I ended up using CCleaner to uninstall/remove a lot of the remaining bits and pieces.

With that done, I manually deleted the SQL Server folders from Program Files and anywhere else I could find them.

Then I used CCleaner again to clean the Registry of any SQL Server (now) invalid registrations.

I rebooted a few times in there too.

Finally, I had scrubbed away as much of SQL Server 8/9/10/11 (that’s 2000/2005/2008/2012) as I could.
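If you are curious which instance registrations survive a scrub like that, a few lines against the registry will list them (this assumes you run it on the server itself; the path below is the standard one for 64-bit instances, with 32-bit ones living under Wow6432Node):

    # List the SQL Server instances still registered on this machine.
    import winreg

    path = r"SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            index = 0
            while True:
                try:
                    name, instance_id, _ = winreg.EnumValue(key, index)
                    print(f"{name} -> {instance_id}")
                    index += 1
                except OSError:  # no more values
                    break
    except FileNotFoundError:
        print("No SQL Server instances registered.")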

The install for 2014 went smoothly after that:

Folders Created by SQL Server 2014 Install

Guess what?  Half the stuff I had removed was reinstalled by SQL Server 2014.  It installs components from a bunch of SQL Server versions:

Components Installed by SQL Server 2014

So, now that I have the SQL Server machine configured, I am waiting for the SharePoint server machine to be delivered.  Then we will be off to the races.

BI Data Source – SSAS

Well now.  It turns out that PerformancePoint for SharePoint has quite a few nifty little features (after reading the help topics on the blank PerformancePoint site), but to work with it properly I need to have some data in a SQL Server Analysis Services (SSAS) Cube.  Since we are still in the stone ages and running on SQL Server 2008 R2 (hey, at least we moved to R2 last year), I will need to get some reasonably slice-and-diceable data in there at first.  I have an excellent sample dataset comprising the last 2 years of Head Count data (i.e. number of people who work here).  This can be sliced by State, District, Top level Manager, HQ or Not, and Location.

So, before I can start dashboarding, I have to do some ETL to get our sooooo not-in-star-schema raw data cleaned and into the Cube.  Since our data lives in Oracle, I will shunt it to a SQL Server database first.  For this, I will use the Attunity drivers.  If you haven’t heard of these, and you work with SSIS and Oracle (or Teradata), you should look into them.  They’re distributed free by Microsoft, and they work with SSIS 2008 and 2012.  They are 10-50 times faster than Oracle’s drivers and Microsoft’s Oracle drivers.
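If you ever need to do the same shunt outside of SSIS, it only takes a few lines. This is not the Attunity route, just a sketch of the idea; the connection strings, table, and column names are invented:

    # Illustration only: copy rows from Oracle into a SQL Server staging table.
    # This is NOT the SSIS/Attunity approach described above; all names are placeholders.
    import cx_Oracle
    import pyodbc

    src = cx_Oracle.connect("hr_user/secret@orahost/ORCL")
    dst = pyodbc.connect("Driver={SQL Server};Server=MyServer;Database=Staging;Trusted_Connection=yes;")
    dst_cur = dst.cursor()
    dst_cur.fast_executemany = True  # speeds up the INSERTs considerably

    src_cur = src.cursor()
    src_cur.execute("SELECT emp_id, state, district, location FROM headcount_raw")
    while True:
        rows = src_cur.fetchmany(10000)
        if not rows:
            break
        dst_cur.executemany(
            "INSERT INTO dbo.HeadCountStage (EmpId, State, District, Location) VALUES (?, ?, ?, ?)",
            rows,
        )
    dst.commit()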

Meanwhile, I installed the Dundas Dashboard software on my local machine so that I could check it out.  The install went fine; it created a SQL Server instance for itself and set up the tools as expected.  There are a ton of additional features (most of which I installed) and add-ons (none of which I installed – yet).

I also installed the free MicroStrategy Analytics Desktop.  Turns out this is a web app so it installed locally on port 8082.  Interesting.

So, once I’ve moved the Headcount Data into the Cube, I can try out the Dashboarding in PerformancePoint, Dundas and MicroStrategy to see what’s what.