Quickly Processing One Million Transactions in Azure SQL Database

I’ve had to solve an interesting problem this week. I started the week with an SSIS package that ran in 2 minutes. It extracts a million transaction records from DB2 to a file, uploads the file to Azure Blob Storage, and BULK INSERTs the file into an Azure SQL Database staging table. All in 2 minutes. This was acceptable performance and faster than any of the other options we were considering.
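For reference, the Blob Storage load piece of that package boils down to a BULK INSERT against an external data source. Here’s a sketch of what that looks like in Azure SQL Database — the credential, data source name, container URL, file name, and delimiters below are placeholders rather than the actual values from this project, and the target is the staging table described later in the post:

-- One-time setup: a credential and an external data source pointing at the container.
-- (A database master key must already exist before creating the scoped credential.)
CREATE DATABASE SCOPED CREDENTIAL BlobLoadCredential
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token, without the leading ?>';

CREATE EXTERNAL DATA SOURCE EtlBlobStorage
WITH (TYPE = BLOB_STORAGE,
      LOCATION = 'https://<storageaccount>.blob.core.windows.net/<container>',
      CREDENTIAL = BlobLoadCredential);

-- Per run: bulk load the extract file into the staging table.
BULK INSERT ETLSourceStagingTable
FROM 'transactions.dat'
WITH (DATA_SOURCE = 'EtlBlobStorage',
      FIELDTERMINATOR = '|',
      ROWTERMINATOR = '\n',
      TABLOCK);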

Each of the DB2 records represents an Insert, Update or Delete to the base table in DB2. I get the Transaction Type (M – Merge (Insert/Update) or D – Delete), and a time stamp in addition to the columns from the row.

So a typical set of rows might look like this:

| Id (Identity) | Field1 | Field2 | Field3 | Field4 | TrxType | RowCreated TimeStamp |
|---|---|---|---|---|---|---|
| 1 | 13940001 | 1 | 24785779 | 41 | M | 11:37:31.650 |
| 2 | 13940001 | 2 | 24785779 | 20 | M | 11:37:32.070 |
| 3 | 13940001 | 3 | 24785779 | 26 | M | 11:37:32.185 |
| 4 | 13940001 | 2 | 24785779 | 21 | M | 11:37:32.205 |
| 5 | 13940001 | 3 | 24785779 | 4 | M | 11:37:32.265 |
| 6 | 13940001 | 2 | 24785779 | 29 | D | 11:37:32.391 |
| 7 | 13940001 | 2 | 24785779 | 18 | M | 11:37:33.392 |

In the example above, the rows are all for the same document (see Field1: value 13940001). Rows with Field2 = 1, 2, 3 were added (Id 1, 2, 3). Then rows 2 and 3 were changed (Id 4, 5). Then row 2 was deleted (Id 6) and a new row 2 was inserted (Id 7).

Here is the definition of the source table in the Azure SQL Database:

CREATE TABLE ETLSource(
       Field1 [numeric](12, 0) NULL,
       Field2 [smallint] NULL,
       Field3 [int] NULL,
       Field4 [smallint] NULL,
       Field5 [nvarchar](1) NULL,
       [UpdateType] [nchar](1) NOT NULL,
       [RowCreated] [datetime2](7) NOT NULL,
       [Id] BIGINT IDENTITY NOT NULL
) WITH (DATA_COMPRESSION = PAGE)
GO

CREATE UNIQUE CLUSTERED INDEX ETLSourcePK ON ETLSource (Id) WITH (DATA_COMPRESSION = PAGE);
CREATE UNIQUE NONCLUSTERED  INDEX ETLSourceIdx1 ON ETLSource (UpdateType, RowCreated, Field1, Field2) WITH (DATA_COMPRESSION = PAGE);
CREATE UNIQUE NONCLUSTERED  INDEX ETLSourceIdx2 ON ETLSource (Field1, Field2, UpdateType, RowCreated) WITH (DATA_COMPRESSION = PAGE);
GO

And here is the definition of the target table in the Azure SQL Database:

CREATE TABLE ETLTarget(
       Field1 [numeric](12, 0) NULL,
       Field2 [smallint] NULL,
       Field3 [int] NULL,
       Field4 [smallint] NULL,
       Field5 [nvarchar](1) NULL,
       [BatchDate] [datetime2](7) NULL
) WITH (DATA_COMPRESSION = PAGE)
GO

CREATE CLUSTERED INDEX ETLTargetPK ON ETLTarget (Field1, Field2) WITH (DATA_COMPRESSION = PAGE);
GO

At first, I tried a cursor. I know how to write them and it was easy enough to create a cursor that looped through the rows and used either a DELETE statement or a MERGE statement to deal with each one. Here’s what that looked like:

DECLARE @BatchDate DATETIME2(7) = SYSUTCDATETIME();

DECLARE @Field1 NUMERIC(12, 0)
DECLARE @Field2 SMALLINT
DECLARE @Field3 INT
DECLARE @Field4 SMALLINT
DECLARE @Field5 NVARCHAR(1)
DECLARE @UpdateType	CHAR(1)
DECLARE @RowCreated	DATETIME2(7)

DECLARE cur CURSOR LOCAL FAST_FORWARD FOR SELECT
	   Field1 
	  ,Field2 
	  ,Field3 
	  ,Field4 
	  ,Field5 
	  ,UpdateType
	  ,RowCreated
    FROM ETLSource
    ORDER BY id

OPEN cur

FETCH NEXT FROM cur INTO 
	 @Field1 
    , @Field2 
    , @Field3 
    , @Field4 
    , @Field5 
    , @UpdateType
    , @RowCreated

WHILE @@fetch_status = 0
BEGIN

    IF @UpdateType = 'D'
    BEGIN
	   DELETE FROM dbo.ETLTarget
	   WHERE Field1 = @Field1
		  AND Field2 = @Field2;
    END
    IF @UpdateType = 'M'
    BEGIN
	   --Merge the changes that are left
	   MERGE ETLTarget AS target 
	   USING (
		  VALUES(
		    @Field1 
		  , @Field2 
		  , @Field3 
		  , @Field4 
		  , @Field5 
		  )
	   ) AS source (
		Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5 )
	   ON (target.Field1 = source.Field1
		  AND target.Field2 = source.Field2)
	   WHEN MATCHED
		  THEN UPDATE
			 SET target.Field3 = source.Field3
			    ,target.Field4 = source.Field4
			    ,target.Field5 = source.Field5
			    ,target.BatchDate = @BatchDate
	   WHEN NOT MATCHED BY target
		  THEN INSERT (
			   Field1 
			 , Field2 
			 , Field3 
			 , Field4 
			 , Field5
			 , BatchDate)
		  VALUES (@Field1 
			 , @Field2 
			 , @Field3 
			 , @Field4 
			 , @Field5 
			 , @BatchDate);
    END;

    FETCH NEXT FROM cur INTO 
		@Field1 
	   , @Field2 
	   , @Field3 
	   , @Field4 
	   , @Field5 
	   , @UpdateType
	   , @RowCreated
END

CLOSE cur
DEALLOCATE cur

Unfortunately, this solution was TERRIBLY slow. Cursors are notorious for being slow. This one worked fine for 1,000 transaction rows, but, after running for an hour and only processing a small portion of the million rows, I killed it and went looking for a set-based alternative.

Next, I tried a set-based MERGE statement. This was problematic because it kept complaining that multiple source records were trying to change the same target record. This complaint made sense when I realized that a row might be inserted and updated in the same day, so it would have two source transactions. So I needed to get rid of the extras. It turns out that I really only care about the latest change per key. If it’s an insert or update, MERGE will insert or update the target row appropriately; if it’s a delete, MERGE can handle that too. But how to select only the most recent row for each key? The standard de-duplication CTE query served as a model. Here is the final statement that worked:

WITH sourceRows AS (
    SELECT *, RN  = ROW_NUMBER() OVER (PARTITION BY
	   Field1, Field2
	   ORDER BY Field1, Field2, RowCreated DESC)
    FROM ETLSourceStagingTable)

INSERT INTO ETLSource (
      Field1 
    , Field2 
    , Field3 
    , Field4 
    , Field5
    , UpdateType
    , RowCreated)
SELECT       
      Field1 
    , Field2 
    , Field3 
    , Field4 
    , Field5
    , UpdateType
    , RowCreated 
FROM sourceRows
WHERE RN = 1
ORDER BY RowCreated;

Note the introduction of a Staging Table. The SSIS package now uses BULK INSERT to load the Staging Table from the file in Blob Storage. The query above is used to load only the relevant rows (the most recent per key) into the ETLSource table. The Staging Table has the same structure as the ETLSource table, minus the Id column, and it has an index on it like this:

CREATE INDEX ETLSourceStagingTableSort ON ETLSourceStagingTable
(Field1, Field2, RowCreated DESC) WITH (DATA_COMPRESSION = PAGE)
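For completeness, here’s what that Staging Table looks like — a sketch based on the description above (the ETLSource definition minus the Id column), not the exact DDL from the project:

CREATE TABLE ETLSourceStagingTable(
       Field1 [numeric](12, 0) NULL,
       Field2 [smallint] NULL,
       Field3 [int] NULL,
       Field4 [smallint] NULL,
       Field5 [nvarchar](1) NULL,
       [UpdateType] [nchar](1) NOT NULL,
       [RowCreated] [datetime2](7) NOT NULL
) WITH (DATA_COMPRESSION = PAGE)
GO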

The use of the Staging Table and the CTE query above meant that, of my original 7 example rows, only three are relevant:

| Id (Identity) | Field1 | Field2 | Field3 | Field4 | TrxType | RowCreated TimeStamp | Relevant |
|---|---|---|---|---|---|---|---|
| 1 | 13940001 | 1 | 24785779 | 41 | M | 11:37:31.650 | YES |
| 2 | 13940001 | 2 | 24785779 | 20 | M | 11:37:32.070 | |
| 3 | 13940001 | 3 | 24785779 | 26 | M | 11:37:32.185 | |
| 4 | 13940001 | 2 | 24785779 | 21 | M | 11:37:32.205 | |
| 5 | 13940001 | 3 | 24785779 | 4 | M | 11:37:32.265 | YES |
| 6 | 13940001 | 2 | 24785779 | 29 | D | 11:37:32.391 | |
| 7 | 13940001 | 2 | 24785779 | 18 | M | 11:37:33.392 | YES |

Now I just needed to craft the MERGE statement so it worked properly. When I did, this is what I had:

MERGE ETLTarget AS target USING (
    SELECT 
	     Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5
	   , UpdateType
    FROM ETLSource
    ) AS source (Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5
	   , UpdateType)
ON (target.Field1 = source.Field1
    AND target.Field2 = source.Field2)
WHEN MATCHED AND source.UpdateType = 'M'
    THEN UPDATE
	   SET target.Field3 = source.Field3
		  ,target.Field4 = source.Field4
		  ,target.Field5 = source.Field5
		  ,target.BatchDate = @BatchDate
WHEN MATCHED AND source.UpdateType = 'D'
    THEN DELETE  
WHEN NOT MATCHED BY TARGET AND source.UpdateType = 'M'
    THEN INSERT (Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5
	   , BatchDate)
    VALUES (Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5
	   , @BatchDate);

That was fine for a small data set but crawled on a big one, so I added batching so the MERGE only had to deal with a small set of rows at a time. Since the clustered PK is an identity column, and since I truncate ETLSource before loading it, I am guaranteed that the Id column will hold the values 1…n, where n is the total number of rows. So, I initialize an @rows variable right after inserting the rows into ETLSource:

SET @rows = @@rowcount;

Next, I create a while loop for each batch:

DECLARE @batchSize INT = 10000;
DECLARE @start INT = 1;
DECLARE @end INT = @batchSize;

WHILE (@start <= @rows)
BEGIN

    MERGE ETLTarget AS target USING (
    ...;

    SET @start = @start + @batchSize;
    SET @end = @end + @batchSize;
END

Then I add the @start and @end to the MERGE statement source:

MERGE ETLTarget AS target USING (
    SELECT 
	     Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5
	   , UpdateType
    FROM ETLSource
    WHERE id BETWEEN @start AND @end
    ) AS source

And this worked!!! I was able to process a million rows in 1 minute. Yay!

Then I tried 10 million rows. Ugh. Now the MERGE was only processing 10,000 rows per minute. Crap. What changed? Same code. Same data, just more of it. A look at the query plan explained it all. Here it is with 1 million rows:

And here it is with 10 million rows:

The ETLTarget table has 80 million rows in it. When I had 10 million rows in the ETLSource table, the query optimizer decided that it would be easier to do an Index SCAN instead of a SEEK. In this case, however, the SEEK would have been A LOT faster.

So how do we force it to use a seek? It turns out the optimizer has no idea how many rows we’re processing in a batch, so it bases its optimization on the entire table. We could use a LOOP JOIN hint at the end of the MERGE statement:

MERGE ... , @BatchDate)
    OPTION (LOOP JOIN);
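Spelled out against the batched MERGE from earlier, the hinted version is just the same statement with the hint appended after the final VALUES list — a sketch:

MERGE ETLTarget AS target USING (
    SELECT
         Field1
       , Field2
       , Field3
       , Field4
       , Field5
       , UpdateType
    FROM ETLSource
    WHERE id BETWEEN @start AND @end
    ) AS source (Field1, Field2, Field3, Field4, Field5, UpdateType)
ON (target.Field1 = source.Field1
    AND target.Field2 = source.Field2)
WHEN MATCHED AND source.UpdateType = 'M'
    THEN UPDATE
       SET target.Field3 = source.Field3
          ,target.Field4 = source.Field4
          ,target.Field5 = source.Field5
          ,target.BatchDate = @BatchDate
WHEN MATCHED AND source.UpdateType = 'D'
    THEN DELETE
WHEN NOT MATCHED BY TARGET AND source.UpdateType = 'M'
    THEN INSERT (Field1, Field2, Field3, Field4, Field5, BatchDate)
    VALUES (Field1, Field2, Field3, Field4, Field5, @BatchDate)
OPTION (LOOP JOIN);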

But most folks like to avoid these kinds of hints, so we needed another way. Someone suggested putting a TOP clause in the source SELECT statement. That worked. Here’s how the MERGE looks now:

MERGE ETLTarget AS target USING (
    SELECT TOP 10000
	     Field1 
	   , Field2 
	   , Field3 
	   , Field4 
	   , Field5
	   , UpdateType
    FROM ETLSource
    WHERE id BETWEEN @start AND @end
    ORDER BY id
    ) AS source
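Putting all the pieces together, the batched load ends up looking roughly like this. This is a consolidated sketch of the snippets above rather than the exact production script; the TOP value is a literal, so keep it in step with @batchSize:

DECLARE @BatchDate DATETIME2(7) = SYSUTCDATETIME();
DECLARE @batchSize INT = 10000;
DECLARE @start INT = 1;
DECLARE @end INT = @batchSize;
DECLARE @rows INT;

-- @rows is normally captured from @@ROWCOUNT right after the INSERT into ETLSource;
-- counting the table gives the same value if you run this step on its own.
SELECT @rows = COUNT(*) FROM ETLSource;

WHILE (@start <= @rows)
BEGIN
    MERGE ETLTarget AS target USING (
        SELECT TOP 10000
             Field1
           , Field2
           , Field3
           , Field4
           , Field5
           , UpdateType
        FROM ETLSource
        WHERE id BETWEEN @start AND @end
        ORDER BY id
        ) AS source (Field1, Field2, Field3, Field4, Field5, UpdateType)
    ON (target.Field1 = source.Field1
        AND target.Field2 = source.Field2)
    WHEN MATCHED AND source.UpdateType = 'M'
        THEN UPDATE
           SET target.Field3 = source.Field3
              ,target.Field4 = source.Field4
              ,target.Field5 = source.Field5
              ,target.BatchDate = @BatchDate
    WHEN MATCHED AND source.UpdateType = 'D'
        THEN DELETE
    WHEN NOT MATCHED BY TARGET AND source.UpdateType = 'M'
        THEN INSERT (Field1, Field2, Field3, Field4, Field5, BatchDate)
        VALUES (Field1, Field2, Field3, Field4, Field5, @BatchDate);

    SET @start = @start + @batchSize;
    SET @end = @end + @batchSize;
END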

With this in place, I was able to process the 10 million rows in 10 minutes. Woohoo! Just to be sure, I re-ran the process with a thousand, a million and 10 million rows and it was consistent. I was able to process a million rows a minute.

When Loading Data, Should I Drop Indexes or Not?

I just ran a few simple tests in Azure SQL DB to see how each would perform.

I have a target table with an identity column that is the clustered primary key, and two other indexes that are the same except for the field order. (Whether having both is useful is a question for another day.) Here’s the DDL for the target table:

CREATE TABLE [ETL1](
	Field1 [numeric](12, 0) NULL,
	Field2 [smallint] NULL,
	Field3 [int] NULL,
	Field4 [smallint] NULL,
	Field5 [nvarchar](1) NULL,
	Field6 [nvarchar](2) NULL,
	Field7 [numeric](4, 0) NULL,
	Field8 [numeric](2, 0) NULL,
	Field9 [nvarchar](2) NULL,
	Field10 [nvarchar](8) NULL,
	Field11 [datetime2](7) NULL,
	Field12 [nvarchar](8) NULL,
	Field13 [datetime2](7) NULL,
	[UpdateType] [nchar](1) NOT NULL,
	[RowCreated] [datetime2](7) NOT NULL,
	Id BIGINT IDENTITY NOT NULL
) ON [PRIMARY]
GO

CREATE UNIQUE CLUSTERED INDEX iscdpk ON ETL1 (Id) WITH (DATA_COMPRESSION = PAGE);
CREATE UNIQUE NONCLUSTERED  INDEX iscd1 ON ETL1 (UpdateType, RowCreated, Field1, Field2) WITH (DATA_COMPRESSION = PAGE);
CREATE UNIQUE NONCLUSTERED  INDEX iscd2 ON ETL1 (Field1, Field2, UpdateType, RowCreated) WITH (DATA_COMPRESSION = PAGE);
GO

Test 1: Truncate the Target, Drop the Indexes, Insert the Data, Recreate the Indexes.

Test 2: Drop the Indexes, Truncate the Target, Insert the Data, Recreate the Indexes

Test 3: Just Truncate the Target and Insert the Data

Test 4: Truncate the Target, Drop the non-clustered Indexes (leaving the clustered index on the identity column), Insert the Data, Recreate the non-clustered Indexes.

Here are the results. All timings are in milliseconds. These were run on a PRS1 instance of Azure SQL Database.

| Step | Test 1: Trunc then Drop Idxs | Test 2: Drop before Trunc | Test 3: No Drop/Create | Test 4: Trunc but don't drop clustered index |
|---|---|---|---|---|
| Truncate | 4 | 2 | 0 | 4 |
| Drop PK | 8 | 4 | n/a | n/a |
| Drop Index 1 | 5 | 23,630 | n/a | 2 |
| Drop Index 2 | 6 | 2 | n/a | 2 |
| Insert 1.84 M rows | 83,033 | 82,315 | 161,706 | 83,205 |
| Create PK | 20,454 | 21,205 | n/a | n/a |
| Create Index 1 | 12,149 | 12,264 | n/a | 12,265 |
| Create Index 2 | 11,142 | 11,313 | n/a | 11,247 |
| Total Time (ms) | 126,801 | 150,735 | 161,706 | 106,725 |
| Total Time (mins) | 2.11 | 2.51 | 2.70 | 1.78 |
| Delta (ms) | 0 | 23,934 | 34,905 | (20,076) |

Test 4 was the clear winner, as it avoided the cost of recreating the clustered index. That makes sense: the clustered index was being filled in order by the identity column as rows were added. Test 1 came in second, so if your clustered index is not on an identity column, or you have no clustered index, you are still better off dropping and recreating the indexes.

Conclusion: When inserting larger data sets into an empty table, drop the indexes before inserting the data, unless the index is clustered on an identity column.
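For reference, here’s a minimal sketch of the Test 4 pattern against the ETL1 table above; SomeSourceTable is a hypothetical stand-in for whatever feeds your load:

-- Truncate the target and drop only the nonclustered indexes;
-- the clustered index on the identity column stays in place.
TRUNCATE TABLE ETL1;
DROP INDEX iscd1 ON ETL1;
DROP INDEX iscd2 ON ETL1;

-- Load the data (SomeSourceTable is a hypothetical placeholder).
INSERT INTO ETL1 (Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8,
                  Field9, Field10, Field11, Field12, Field13, UpdateType, RowCreated)
SELECT Field1, Field2, Field3, Field4, Field5, Field6, Field7, Field8,
       Field9, Field10, Field11, Field12, Field13, UpdateType, RowCreated
FROM SomeSourceTable;

-- Recreate the nonclustered indexes.
CREATE UNIQUE NONCLUSTERED INDEX iscd1 ON ETL1 (UpdateType, RowCreated, Field1, Field2) WITH (DATA_COMPRESSION = PAGE);
CREATE UNIQUE NONCLUSTERED INDEX iscd2 ON ETL1 (Field1, Field2, UpdateType, RowCreated) WITH (DATA_COMPRESSION = PAGE);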

Azure Data Factory Publishing Error

I am using Azure Data Factory v2. It is bound to an Azure DevOps GIT repository. I made some changes in my branch, including deleting some obsolete items. Then I merged my changes back to the ‘master’ branch.

Now, I’m trying to publish my changes from the ‘master’ branch to the Azure Data Factory. When I do, it says it’s deployed 33 of 33 changes, and then it fails, saying:

“Publishing Error

The document cannot be deleted since it is referenced by <some Pipeline>.”

I searched high and low looking for some evidence that the offending Pipeline existed anywhere in the GIT repo. It did not.

Then, I discovered that you can change from the Azure DevOps GIT version of the Data Factory to the actual Data Factory version by selecting the latter from the dropdown in the top left corner of the Data Factory editor:

This saved the day. I was able to locate and delete the offending Pipeline(s) directly from the actual Data Factory. Then, when I switched back to Azure DevOps GIT mode, it allowed me to publish the changes from the ‘master’ branch. Woohoo!

Publishing An Alexa Skill

An Alexa Skill in Javascript – Part 6

We’ve done it!  We’ve built an Alexa skill!  Woohoo!  Now we need to publish our skill to the Alexa Skills Catalog and make it official.  Publishing your Alexa skill is the last step in the process of developing and deploying an Alexa Skill.

NOTE:  Pay attention to Amazon’s developer promotions.  You can get a free T-Shirt, Echo Dot, Echo Spot, and even the Echo Show if you publish your skill at the right time.

Prepping the skill.json File

There are a few things we can do to enhance the skill.json file so our skill is ready for publishing.

First, find two images that you can use to represent your skill.  These will appear in the catalog inside the speech bubble, like so:

The images should be png files in two sizes:  small (108 x 108 px) and large (512 x 512 px).  Create a new folder in your project called images and drop the two image files into it for safe-keeping.  We’ll add them to the distribution page later.

Next, we can add some keywords to our skill.json file to help people find our skill more easily. Right after the "description" tag, add a "keywords" tag:

                    "description": "Roll Some Dice will roll ...",
                    "keywords": [
                        "die", "dice", "rpg", "dice roller"
                    ]
                }
            },

The last, and perhaps most difficult, task is to create the testing instructions.

Tip:  Test your own testing instructions.  Make sure Alexa actually does what you expect her to do.

Based on our intent’s sample phrases, the slots, the synonyms, and the edge conditions, we have quite a few testing instructions.  Here are those sample phrases we put in the en-US skill model file:

"samples": [
    "roll a die {plusMinus} {modifier}",
    "roll {diceCount} dice {plusMinus} {modifier}",
    "roll {diceCount} dice",
    "roll a die"
]

To test our intent, we need to test the following:

Basic Phrases:

  1. Say “Alexa, start Roll Some Dice and roll a die.”  Alexa will generate a random number between 1 and 6 and say, for example, “You said roll 1 dice. I rolled a three.”
  2. Say “Alexa, start Roll Some Dice and roll 10 dice.”  Alexa will generate 10 random numbers between 1 and 6 and report the results, for example, “You said roll 10 dice. I rolled 3 ones, 4 threes, a four, a five and a six.”

With a modifier:

  1. Say “Alexa, start Roll Some Dice and roll a die plus one.”  Alexa will generate a random number between 1 and 6  and add 1, and say, for example, “You said roll 1 dice plus 1. I rolled a two.”
  2. Say “Alexa, start Roll Some Dice and roll a die minus one.” Alexa will generate a random number between 1 and 6  and subtract 1, and say, for example, “You said roll 1 dice minus 1. I rolled a three.”

With a modifier using synonyms:

  1. Say “Alexa, start Roll Some Dice and roll a die add one.”  Alexa will generate a random number between 1 and 6  and add 1, and say, for example, “You said roll 1 dice plus 1. I rolled a six.”
  2. Say “Alexa, start Roll Some Dice and roll a die subtract one.” Alexa will generate a random number between 1 and 6  and subtract 1, and say, for example, “You said roll 1 dice minus 1. I rolled a three.”

With a modifier, testing the edge/limits we put in:

  1. Say “Alexa, start Roll Some Dice and roll a die plus seven.”  Alexa will generate a random number between 1 and 6  and add 7, maxing out at 6, and say, for example, “You said roll 1 dice plus 7. I rolled a six.”
  2. Say “Alexa, start Roll Some Dice and roll a die minus seven.” Alexa will generate a random number between 1 and 6  and subtract 7, with a minimum value of 0, and say, for example, “You said roll 1 dice minus 1. I rolled a zero.”

Testing missing slots and default values:

  1. Say “Alexa, start Roll Some Dice and roll dice.”  Alexa will generate a random number between 1 and 6 and say, for example, “You said roll 1 dice. I rolled a three.”
  2. Say “Alexa, start Roll some Dice and roll a die plus.”  Alexa will generate a random number between 1 and 6 and say, for example, “You said roll 1 dice. I rolled a three.”
  3. Say “Alexa, start Roll some Dice and roll 1 die frog 3.”  Alexa will not understand and should respond, “Sorry, I can’t understand the command.”

Now we need to “stringify” those phrases and put them in our skill.json file here:

"isAvailableWorldwide": true,
"testingInstructions": "Sample Testing Instructions.",
"category": "GAME_INFO_AND_ACCESSORY",

The easiest way I’ve found to convert the text in the lists above into the correct format is to use a text editor (like Notepad++) to replace the quotation marks with \", and then MS Word to replace all paragraph marks (under Special…) with \n and tab characters (also under Special…) with spaces.  When you’re done, the tests will look like this:

Basic Phrases:\n1. Say \"Alexa, start Roll Some Dice and roll a die.\"  Alexa will generate a random number between 1 and 6 and say, for example, \"You said roll 1 dice. I rolled a three.\"\n2. Say \"Alexa, start Roll Some Dice and roll 10 dice.\"  Alexa will generate 10 random numbers between 1 and 6 and report the results, for example, \"You said roll 10 dice. I rolled 3 ones, 4 threes, a four, a five and a six.\"\nWith a modifier:\n1. Say \"Alexa, start Roll Some Dice and roll a die plus one.\"  Alexa will generate a random number between 1 and 6 and add 1, and say, for example, \"You said roll 1 dice plus 1. I rolled a two.\"\n2. Say \"Alexa, start Roll Some Dice and roll a die minus one.\" Alexa will generate a random number between 1 and 6 and subtract 1, and say, for example, \"You said roll 1 dice minus 1. I rolled a three.\"\nWith a modifier using synonyms:\n1. Say \"Alexa, start Roll Some Dice and roll a die add one.\"  Alexa will generate a random number between 1 and 6 and add 1, and say, for example, \"You said roll 1 dice plus 1. I rolled a six.\"\n2. Say \"Alexa, start Roll Some Dice and roll a die subtract one.\" Alexa will generate a random number between 1 and 6 and subtract 1, and say, for example, \"You said roll 1 dice minus 1. I rolled a three.\"\nWith a modifier, testing the edge/limits we put in:\n1. Say \"Alexa, start Roll Some Dice and roll a die plus seven.\"  Alexa will generate a random number between 1 and 6 and add 7, maxing out at 6, and say, for example, \"You said roll 1 dice plus 7. I rolled a six.\"\n2. Say \"Alexa, start Roll Some Dice and roll a die minus seven.\" Alexa will generate a random number between 1 and 6 and subtract 7, with a minimum value of 0, and say, for example, \"You said roll 1 dice minus 1. I rolled a zero.\"\nTesting missing slots and default values:\n1. Say \"Alexa, start Roll Some Dice and roll dice.\"  Alexa will generate a random number between 1 and 6 and say, for example, \"You said roll 1 dice. I rolled a three.\"\n2. Say \"Alexa, start Roll some Dice and roll a die plus.\"  Alexa will generate a random number between 1 and 6 and say, for example, \"You said roll 1 dice. I rolled a three.\"\n3. Say \"Alexa, start Roll some Dice and roll 1 die frog 3.\"  Alexa will not understand and should respond, \"Sorry, I can't understand the command.\"\n

Now you can plug the new string in the place of Sample Testing Instructions.  Like so:

"isAvailableWorldwide": true,
"testingInstructions": "Basic Phrases:\n1. Say \"Alexa, start Roll Some Dice and roll a die.\"  Alexa will generate a random number between 1 and 6 and say, for example, \"You said roll 1 dice. I rolled a three.\"\n2. Say \"Alexa, start Roll Some Dice and roll 10 dice.\"  Alexa will generate 10 random numbers between 1 and 6 and report the results, for example, \"You said roll 10 dice. I rolled 3 ones, 4 threes, a four, a five and a six.\"\nWith a modifier:\n1. Say \"Alexa, start Roll Some Dice and roll a die plus one.\"  Alexa will generate a random number between 1 and 6 and add 1, and say, for example, \"You said roll 1 dice plus 1. I rolled a two.\"\n2. Say \"Alexa, start Roll Some Dice and roll a die minus one.\" Alexa will generate a random number between 1 and 6 and subtract 1, and say, for example, \"You said roll 1 dice minus 1. I rolled a three.\"\nWith a modifier using synonyms:\n1. Say \"Alexa, start Roll Some Dice and roll a die add one.\"  Alexa will generate a random number between 1 and 6 and add 1, and say, for example, \"You said roll 1 dice plus 1. I rolled a six.\"\n2. Say \"Alexa, start Roll Some Dice and roll a die subtract one.\" Alexa will generate a random number between 1 and 6 and subtract 1, and say, for example, \"You said roll 1 dice minus 1. I rolled a three.\"\nWith a modifier, testing the edge/limits we put in:\n1. Say \"Alexa, start Roll Some Dice and roll a die plus seven.\"  Alexa will generate a random number between 1 and 6 and add 7, maxing out at 6, and say, for example, \"You said roll 1 dice plus 7. I rolled a six.\"\n2. Say \"Alexa, start Roll Some Dice and roll a die minus seven.\" Alexa will generate a random number between 1 and 6 and subtract 7, with a minimum value of 0, and say, for example, \"You said roll 1 dice minus 1. I rolled a zero.\"\nTesting missing slots and default values:\n1. Say \"Alexa, start Roll Some Dice and roll dice.\"  Alexa will generate a random number between 1 and 6 and say, for example, \"You said roll 1 dice. I rolled a three.\"\n2. Say \"Alexa, start Roll some Dice and roll a die plus.\"  Alexa will generate a random number between 1 and 6 and say, for example, \"You said roll 1 dice. I rolled a three.\"\n3. Say \"Alexa, start Roll some Dice and roll 1 die frog 3.\"  Alexa will not understand and should respond, \"Sorry, I can't understand the command.\"\n",
"category": "GAME_INFO_AND_ACCESSORY",

Now, with all that in place, you should be able to push the skill to Amazon again.   At a PowerShell command prompt, from the root folder of your skill, enter the following command:

ask deploy --profile "default" --target "all"

Assuming your skill deployed properly and passes all your tests, we are ready to go through the Distribution and Certification steps in the Alexa Developer Console.

On the first page of Distribution, we can upload the icon files we found earlier.  Simply add them here:

Now click Save and Continue.

Then, because we already filled all of this in via the skill.json file, you can just click Save and Continue again.

And again.

Now, you’re at the validation stage.  Your skill should pass validation.

Click on Functional Test, then click Run.

It passes!! Your skill is ready for Submission.  Go ahead.  Click that Submit button and relax for a bit.  Depending on the promotions and backlog, your skill may or may not get reviewed within the next 1-3 business days.

That’s it for now.  


A few days later…  WOOHOO!