#TSQL2sday 171: Describe the Most Recent Issue you Closed

What happens when nothing changes, sp_send_dbmail gets given sysadmin, and you still can’t get emails.

Words: 817

Time to read: ~4 minutes

Invitation

Welcome to TSQL2sday, the monthly blogging party where we are given a topic and are asked to write a blog post about it.

This month we have Brent Ozar ( blog ) asking us about the most recent issue we closed.

I don’t like posting about issues unless I fundamentally understand the root cause. That’s not the case here. A lot of the explanation here will be hand-waving while spouting “here be dragons, and giants, and three-headed dogs”, but I know enough to give you the gist of the issue.

sp_send_dbmail

Like most issues, even those not affecting production, this one was brought to the DBA team as critical and needed to be fixed yesterday.

A Dev team had raised that a subset of their SQL Agent jobs had failed. The error message stated:

Msg 22050, Level 16, State 1, Line 2
Failed to initialize sqlcmd library with error number -2147467259

That makes sense; the only jobs that were failing were ones that called sp_send_dbmail using the @query parameter. And I know that when you use that parameter, the query gets handed off to the sqlcmd library to run for you.

Google fu

From research (most of the time, I ended up in the same post), the error fragment “failed to initialize sqlcmd library with error number” could be related to a number of things.

  1. The database object not existing
      No, the agent fails even when the query is SELECT 1;
  2. The wrong database context used
      No, SELECT 1;
  3. sqlcmd not being installed or enabled
      It was working beforehand, so I would say not.
  4. Permissions

Permissions

Well, I had already tested that the query worked as long as it was valid SQL, so let’s try permissions.

I increase permissions…no luck.
I grant a bit more permissions…nope.
A bit more permissions…still nothing.
ALL the permissions… MATE, YOU’RE A SYSADMIN! WHAT ARE YOU ON ABOUT?!! …ahem… nothing.

Workaround

Strangely enough, the post mentioned that using a SQL Authentication account worked.
So we tested it using EXECUTE AS LOGIN = 'sa'; and it worked.
Which was weird, but I’ll take a workaround. Especially since it gave us time to investigate more.

Thanks to dbatools, I threw together a PowerShell script that went through all of the SQL Agent jobs containing sp_send_dbmail and wrapped the calls up:

EXECUTE AS LOGIN = 'sa'; EXEC dbo.sp_send_dbmail ...; REVERT

I’m not going to share that script here because it is a glorious mess of spaghetti code and if branches (though a stripped-down sketch of the basic idea follows the list below).
I gave up on regex and went line-by-line, thanks to the massive number of combinations of what was in there, e.g.

  • EXEC msdb.dbo.sp_send_dbmail
  • EXECUTE sp_send_dbmail
  • EXEC msdb..sp_send_dbmail
  • sp_send_dbmail inside a cursor, so we need to REVERT after each call in the middle of the cursor
  • sp_send_dbmail where there are spaces in the @body parameter, so I can’t split on empty lines
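
That said, the harmless core of the idea looks something like this. A rough dbatools/SMO sketch, not the actual script ($SqlInstance is whatever instance you point it at, and it ignores every edge case above):

$jobsToWrap = Get-DbaAgentJob -SqlInstance $SqlInstance |
    Where-Object { $_.JobSteps.Command -match 'sp_send_dbmail' }

foreach ($job in $jobsToWrap) {
    foreach ($step in $job.JobSteps | Where-Object Command -match 'sp_send_dbmail') {
        # naive wrap; the real script had to cope with cursors, @body contents, etc.
        $step.Command = "EXECUTE AS LOGIN = 'sa';`r`n{0}`r`nREVERT;" -f $step.Command
        $step.Alter()   # SMO pushes the changed command back to the job step
    }
}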

What’s Happening?

After getting the workaround in place, the team lead and I noticed something strange.

Sure, EXECUTE AS LOGIN = 'sa'; worked, but try it as a Windows Domain login and you get something different.

Could not obtain information about Windows NT group/user '<login>', error code 0x5

Something weird was happening between SQL Server and Windows.
Great for me! I get to call in outside help.
Not great for ye! The remaining explanation is going to be shallower than the amount of water I put in my whiskey.

What Changed

Nothing!

Or so we were told. Repeatedly.
We did not believe that. Repeatedly.

Next, we started to get “The target principal name is incorrect. Cannot generate SSPI Context” on the servers.

Not being able to send emails from a SQL Agent Job is one thing, but not being able to Windows Authenticate into a SQL instance at all is another thing altogether.

Eventually, as awareness of the issue increased, the problem was narrowed down to a server configuration on a Domain Controller. I’m assuming that the name of this server configuration is “nothing”.

“Nothing” was set on one server but not the other, meaning that using one DC over another meant Kerberos did something it was not supposed to. I’m reliably informed that “nothing” has something to do with encryption and gatekeeping. I reliably replied that Kerberos should be spelt Cerberus but was ignored.

Testing

With “nothing” in place properly, the SSPI Context errors disappeared.

I reverted the workaround EXECUTE AS wrapper on sp_send_dbmail and emails started flowing again, even without the wrapper. Even with permissions reduced back to what they were.

Research

Sometimes the problem is actually outside SQL Server. Those are the good days. Other days it is a SQL Server problem and you have to fix it yesterday.

All I can do is take what happens, do a bit more research, and learn from it. That way, if it happens again, I can speed up the resolution process.

At least this way, if someone ever asks me if I know anything about Kerberos, I can tell them that I know about “nothing”.

I’ll enjoy that.

Initialize-Choice

Dear Host, you don’t have to use Read-Host. There is a choice.

Words: 590

Time to read: ~ 3 minutes.

Interactive

I’m an advocate of automation. I don’t think automation that requires user input is the correct way to do automation. However, there are times when user interaction is wanted.

I didn’t say “needed”, I said “wanted”.

It’s either a case of severe risk-aversion, being burnt by bad bots, or skirting the fine line between wanting to be involved in the work and not wanting to do all of it.

Effectively, creating a Big Red Button, with a secondary “are you sure?” screen, followed by a “last chance” pop-up. You know, the “let’s introduce extra options to make sure things won’t go wrong” approach.

Sorry, did I say I was an automation advocate? I think I meant to write automation snob.

The Question

Most of the work I’ve been doing lately has been project-orientated, production DBA work e.g. building new servers, migrating across cloud providers, enabling TDE…

…working with Distributed AGs. I think this is the first SQL Server technology that I swing between love and hate for in equal measure.

So, automation is nearly non-existent in my tasks right now. But when I got asked if there was an easier way to get user input (taking into account bad answers, shortcuts, default options, etc.) than a plethora of Read-Host commands and double-checking the user input, well…

I didn’t know.

But, thankfully, I speak to people who are more experienced and smarter than I am when it comes to PowerShell, and I remembered them mentioning $Host.UI.PromptForChoice() before.

Initialize-Choice.ps1

Here’s the link to the code. I’m going to have to ask you to excuse the formatting. I’ve forgotten how to embed the code in an aesthetic way and have misplaced my desire to research how again.

https://github.com/shaneis/RandomScripts/blob/28e2c96e08de0e863134962f7c120c5ffc97abc0/Initialize-Choice.ps1

What this provides us with is a shorthand way to generate a choice prompt for the user, one that will not accept answers outside of the given list.
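
Under the hood, it wraps the raw call; if you’ve never seen $Host.UI.PromptForChoice() bare, it looks something like this (a minimal sketch; the labels and descriptions are made up):

$choiceType = [System.Management.Automation.Host.ChoiceDescription]
$options = [System.Management.Automation.Host.ChoiceDescription[]]@(
    $choiceType::new('&First', 'This is the first option'),
    $choiceType::new('&Cancel', 'Changed your mind, eh?')
)

# returns the index of whichever option the user picked
$decision = $Host.UI.PromptForChoice('Test 01', 'Pick an option', $options, 0)
$options[$decision].Label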

Pudding

$Choices = [ordered]@{
  "Option 1" = "This is the first option"
  "Option 2" = "This is the second"
  "Option 3" = "The third"
  "Option 4" = "You're not going to believe this..."
}
Initialize-Choice -Title 'Test 01' -Choice $Choices

Nice and simple.

The choices that we laid out have been given a shortcut number, and their ordering has been preserved. Any user input outside of the choices is automatically rejected, we can add descriptions for extra help, and we’ve even thrown in a “Cancel” choice as well.

In fact, ordering is the only thing left to mention about this function. It’s why it’s written the way it is and why it only accepts hashtables with the [ordered] accelerator on them.

Here’s how it worked when I first wrote it using plain hashtables.

$Choices = @{
  "Option 1" = "This is the first option"
  "Option 2" = "This is the second"
  "Option 3" = "The third"
  "Option 4" = "You're not going to believe this..."
}
Initialize-ChoiceUnordered -Title 'Test 01' -Choice $Choices

4, 2, 3, 1! Debugging this would sure be fun!

Option 1 is still option 1.
But now Option 3 has gotten jealous and pushed Option 2 down.
While Option 4 is just happy to be included.

Seeing as any code that parses the choice relies on the user picking the right option and the choices being in a determined order, it seemed like a bug to not have the input order preserved.

Sin é

So that’s it.

Feel free to use and abuse it. I would prefer that you use and improve it but it’s your party, you can do what you want to.

It’s your choice.

T-SQL Tuesday #164: Code That Made You Feel A Way

Time to read: ~ 4 minutes

Words: 899

Welcome to T-SQL Tuesday, the monthly blogging party where we receive a topic to post.

This month we have Erik Darling ( blog ) asking us to post about “code that makes [us] feel a way“.

A while back, I was tasked to performance tune some code that brought me through all five stages of grief. It’s best to take you through the events from the developer’s viewpoint. There’s less cursing that way.

Denial

Hey, what’s up? You were asked to troubleshoot some code. Not a big deal; these requests come in from time to time. 

You have code that keeps coming up as a high consumer of CPU on our systems? OK?

It had gotten to the stage where it kept appearing in sp_WhoIsActive? What’s that? And should it not be “whom is active”? Well, agree to disagree.

Let’s see the code so. Wow, that’s a small scroll bar! Yeah, that’s one of ours.

No, it is. I swear it is.

I’m not surprised you can’t find it in stored procedures; we dynamically create it. Here’s the procedure. Yeah, all 900 lines! Pretty cool, huh?

What do you mean, why? We had to! We had to react to the different parameters that get passed in.

Anger

Alright, alright! Calm down. I misspoke.

Yeah, that’s it, breathe. There aren’t different parameters. It’s a parameter. Singular.

No, we still need to generate the query dynamically. Cause the XML could be different. Huh? Yeah, the parameter is XML; it could be anything in there. So we pass the XML in as a nvarchar(max) string.

You spilt some of your coffee there when you banged the desk.

Then we use sp_xml_preparedocument at the start, but then we have to use sp_xml_removedocument at the end.

You’ve never heard of those before? I thought you were a DBA?

We use the “prepare” to get the XML into a state that’s easier to consume, and we use the “remove” so we don’t get memory leaks!

Your face is getting a bit red, did you know that?

It’s SQL Server; it uses enough memory it can share some!

Did we read the docs? No, why? It can use one-eighth of the total memory available for SQL Server. Ah, but that’s “can”, not “will”.

Yes, yes, cursors upon cursors upon cursors. Why? We don’t know how many values are in each XML node in the params, so we have to split them out.

We then join them into a comma-delimited string, which is then used in IN clauses… woah! Is that the sound of your teeth grinding? You know that’s not good for your health. What do you mean neither am I?

Anyway, then we parse all of that down to a massive, what do you call it, swiss-army knife, lego-block, dynamic query built based on what’s passed in. You don’t call it that? I thought you did. What do you call it so? Wow, that’s not a word I had heard of before.

It’s not too bad, though! We pass in everything as literal values, so it’s faster that way. We read up on that parameter sniffing issue you said can happen. That isn’t going to catch us out here!

Modern problems require modern solutions, as they say. What’s that? It’s not a modern problem? I’m a what? Are you allowed to call me that?

Bargaining

You want me to re-write it? You’ll help? It’ll take a lot of time. It’s not really a priority.

Plus, it’s not even worth it. How much CPU? Is that an “illion” with an “m” or an “illion” with a “b”? Per run? Is that a lot? Well, I really wouldn’t know, now would I?

Yeah, good point; you’ll also have to check how much memory it uses. Hey! Probably one-eighth of the total memory available for SQL Server. Alright, it wasn’t that bad a joke; everyone’s a critic!

You’ll give me credit? But it’s fine in Dev; there’s no point. I suppose we could remove some of the WHERE OR IS NULL branches since we know if they’re empty when we create the query.

Nah, that would make the procedure a bit too big. Anything over 1000 lines is too much. Well, 985 is still less than 1000!

Depression

Why are you crying? No, we care, we do, we really do. No, not all the code is like this; this is old. Yes, we stopped using that XML parameter pattern. Yes, and the cursors. Well, most of them, at least. 

Your tears are mixing with the coffee you spilt, you know that? 

Here, there’s really no need for the fetal position in the middle of the office. You have yet to see the code for the busier databases.

You know what, why don’t you go home and come back in tomorrow, and I’ll go over that way, OK?

Acceptance

Hey! There’s our favourite DBA. You seem a lot cheerier.

Oh, you don’t have to perf-tune that query anymore? That’s great; see, it wasn’t so bad. What’s that? The latest release failed? It ran fine in Dev.

You’re permitted to set up a QA environment to test performance before code gets released? What kind of code? Any stored procedure?

Ah, that’s OK. No, really, it’s fine. We started using ORMs and embedding SQL into the app a while ago. This won’t affect us.

You’ve started crying again.

T-SQL Tuesday 161: Having Fun with SQL

Words: 1,283

Time to read: ~ 7 minutes

Welcome to T-SQL Tuesday, the monthly blogging party where a host provides a topic, and we blog about it.

This month we have Reitse Eskens ( blog | twitter | mastodon ) asking us to talk about something fun we have done with T-SQL.

It has been quite a while since I have coded just for fun, so I’m thankful to Reitse for suggesting this. Unfortunately, I don’t have a pre-baked idea for this T-SQL Tuesday, so let’s see what we can come up with.

Echos

Around December 2021, Wordle hit the virtual scenes. Yeah, nearly two years ago. How do you feel about that?

I got swept up in that wave for a while in the same way I got swept up in the other trends of my time, like Pokemon, Sudoku, and Angry Birds.

Eventually, I stopped when I found a PowerShell script by Kieran Walsh ( github | twitter ) where you could narrow down to the correct answer by putting in the results of your guess each round.

This hack led to me realising how much time I was spending on Wordle and that I should stop, much like I did with Pokemon, Sudoku, and Angry Birds.

So, what better thing to do than to try and recreate that PowerShell script in T-SQL?

Rules

I must recreate as much of the script as possible in T-SQL in only one hour.
Yes, I’m aware that’s more of a rule than rules but Wordle needs five letters dammit, and “rule” was crying out for that prosthetic “s”!

Total (code)

Don’t worry, you just have to fill in the variables on lines 19-26.

Split

A few things need to be taken care of right off the bat.

The potential answers have to be stored somewhere in the database. Thankfully, I had the answers in a text file, so creating a table and then inserting them was easy.

I could do the insert with flat files, but I already have PowerShell open so…

$WordleAnswers = Get-Content -Path $HOME\Documents\wordle-answers-alphabetical.txt |
    ForEach-Object -Process {
        [PSCustomObject] @{
            WordleAnswer = $_
        }
    }

$WriteTableParams = @{
    SqlInstance = 'localhost\SQL2019'
    Database = 'Wordle'
    Schema = 'dbo'
    Table = 'WordleAnswers'
    ColumnMap = @{
        "WordleAnswer" = "wordle_answers"
    }
}
$WordleAnswers |
    Write-DbaDataTable @WriteTableParams


Next, we need to create the variables. If I can finish this before the 1-hour mark, I’ll turn this into a stored procedure with parameters and everything! Until then, it’s script and variable times.

DECLARE
	@known_letters AS varchar(5),
	@excluded_letters AS varchar(26),
	@position1 AS char(1),
	@position2 AS char(1),
	@position3 AS char(1),
	@position4 AS char(1),
	@position5 AS char(1),
	@correct_letters AS xml,
	@all_answers_sql AS nvarchar(MAX);

/* region Enter Variables here */
SET @known_letters = '';
SET @excluded_letters = '%[]%';

SET @position1 = NULL;
SET @position2 = NULL;
SET @position3 = NULL;
SET @position4 = NULL;
SET @position5 = NULL;
/* endregion Enter Variables here */

The PowerShell code has known_letters, excluded_letters, positions, and wrong_positions.

I can do all of these easily enough, except for wrong_positions. I can’t think of a way to do hashtables in SQL that doesn’t involve a secondary table or a user-defined table type, etc. I’ll leave that to the end if I have time.
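
(For reference, the hashtable in the PowerShell original is roughly this shape; the names and values here are made up:)

$WrongPositions = @{
    # letter = positions where the letter came back yellow,
    # i.e. it's in the word but NOT in these spots
    'a' = 1, 3
    't' = 5
}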

known_letters is an array of strings. I haven’t updated the SQL Server version on my laptop in a while, so there is no string_split for me. Let’s do the XML way so.

/* region KnownLetters */
SELECT @correct_letters = CONCAT(
	'<known_letters>',
	REPLACE(@known_letters, ',', '</known_letters><known_letters>'),
	'</known_letters>'
);

SELECT
	[known] = [l].[y].value('.', 'char(1)')
INTO #KnownLetters
FROM
(
	VALUES
	(@correct_letters)
) AS [x] ([kl])
CROSS APPLY [kl].nodes('/known_letters') AS [l] (y);
/* endregion KnownLetters */

excluded_letters I can get away with by using some LIKE jiggery-pokery, where it will search for any characters between the square brackets.

positions I can split out into individual variables. I can more easily deal with them then, and it only ends up as an extra five variables this way.

Creating the table would have been handier if I had made a column for each character, but I didn’t, so it’s some SUBSTRING logic for me to get the individual characters out.

SELECT
	[wa].[wordle_answers],
	[g].[char1],
	[g].[char2],
	[g].[char3],
	[g].[char4],
	[g].[char5]
FROM [dbo].[WordleAnswers] AS [wa]
CROSS APPLY (
	VALUES (
		(SUBSTRING([wa].[wordle_answers], 1, 1)),
		(SUBSTRING([wa].[wordle_answers], 2, 1)),
		(SUBSTRING([wa].[wordle_answers], 3, 1)),
		(SUBSTRING([wa].[wordle_answers], 4, 1)),
		(SUBSTRING([wa].[wordle_answers], 5, 1))
	)
) AS [g] ([char1], [char2], [char3], [char4], [char5])

If we do know the positions of some of the letters, then I can strip out a lot of the potential answers straight away. I’m not a fan of Swiss-army-knife WHERE clauses, so I’ll do the dynamic SQL.

I’m also not a fan of WHERE 1=1 in my dynamic code, but I’m running low on time here, and it’s faster to add that in first and start everything else with an AND than it is to check if this is the first clause in the WHERE section or not.

Plus, I’m less against WHERE 1=1 than I am against Swiss-army-knife WHERE clauses.

/* region Known Positions */
CREATE TABLE #AllAnswers
(
	[wordle_answers] char(5),
	[char1] char(1),
	[char2] char(1),
	[char3] char(1),
	[char4] char(1),
	[char5] char(1)
);
SET @all_answers_sql = N'SELECT
	[wa].[wordle_answers],
	[g].[char1],
	[g].[char2],
	[g].[char3],
	[g].[char4],
	[g].[char5]
FROM [dbo].[WordleAnswers] AS [wa]
CROSS APPLY (
	VALUES (
		(SUBSTRING([wa].[wordle_answers], 1, 1)),
		(SUBSTRING([wa].[wordle_answers], 2, 1)),
		(SUBSTRING([wa].[wordle_answers], 3, 1)),
		(SUBSTRING([wa].[wordle_answers], 4, 1)),
		(SUBSTRING([wa].[wordle_answers], 5, 1))
	)
) AS [g] ([char1], [char2], [char3], [char4], [char5])
WHERE 1=1';

IF @position1 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char1] = ',
	QUOTENAME(@position1, '''')
);

IF @position2 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char2] = ',
	QUOTENAME(@position2, '''')
);

IF @position3 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char3] = ',
	QUOTENAME(@position3, '''')
);

IF @position4 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char4] = ',
	QUOTENAME(@position4, '''')
);

IF @position5 IS NOT NULL SET @all_answers_sql = CONCAT(
	@all_answers_sql,
	N'
	AND [g].[char5] = ',
	QUOTENAME(@position5, '''')
);

SET @all_answers_sql = CONCAT(@all_answers_sql, N';')

PRINT @all_answers_sql;

INSERT INTO #AllAnswers
EXECUTE [sys].[sp_executesql] @stmt = @all_answers_sql;
/* endregion Known Positions */

Finally, we can UNPIVOT the individual characters of the words and join them with the known letters to narrow down the answers, while also excluding characters that we know aren’t in the word.

Or else we just return everything we have, minus the excluded characters.

IF LEN(@known_letters) > 0 BEGIN
	SELECT
		*
	FROM #AllAnswers AS [w]
	UNPIVOT 
	(
		[chars] FOR [chr2] IN ([w].[char1], [w].[char2], [w].[char3], [w].[char4], [w].[char5])
	) AS [unpvt]
	JOIN #KnownLetters AS [kl]
		ON [unpvt].[chars] = [kl].[known]
	WHERE
		[unpvt].[wordle_answers] NOT LIKE @excluded_letters
END
ELSE
BEGIN
	SELECT
		*
	FROM #AllAnswers AS [a]
	WHERE [a].[wordle_answers] NOT LIKE @excluded_letters;
END;

Guilt

In the PowerShell script, you can add characters in the excluded_letters parameter that exist in the known_letters parameter, and it will correctly ignore them.

Alas, Tempus fugit and I didn’t get to do the same for this T-SQL version. Maybe that has something to do with translating “time flies” into Latin and then looking up other sayings in Latin, but we can’t say for sure. Mea culpa!

However, it’s been around 50 minutes with minor troubleshooting here and there, so time to test this bad boy.

Tests

Let’s hop over to https://wordleplay.com/ and test it out.

I’ll take the first answer returned each time unless it is the answer we chose previously.

PowerShell

I’m not going to use the wrong_positions parameter here since I didn’t re-create that in T-SQL. Still, I got lucky and got the correct answer on the third guess.

T-SQL

The T-SQL method doesn’t show each iteration as well as the PowerShell does. And, there’s more human brain power required to make sure you don’t enter the same letter in the known_letters and the excluded_letters variables.

Overall though, it did well, with a respectable four guesses.

Point

I’m not going to say that there is no point to these exercises.

Fun is as valid a point as any other. In a work world filled with more demands on our time than the number of Pokemon (there’s still only 150, right?), more technologies to learn than combinations in Sudoku puzzles, and more people demanding the seemingly impossible out of you so that you want to yeet them at solid objects … something something Angry Birds, it’s a welcome change to do something just for fun once in a while.

Thanks, Reitse!

T-SQL Tuesday 157 – End of Year Activity

Words: 544

Time to read: ~ 2 minutes

Read-Clipboard

Welcome to T-SQL Tuesday, the monthly blogging party where we are given a topic and asked to blog about it. This month we have Garry Bargsley asking us to discuss end-of-year activities.

Here

My current workplace has a change freeze in place for the end of the year. Usually, I would say that this is risk-averse.
But seeing as the nature of the business (payments) means that the Thanksgiving/Black Friday/Christmas time is the busiest time of the year, I’m willing to cut them some slack.

So, what to do when we cannot deploy to production? Oh, we’ll still be busy! There are always management-approved fixes that get through, annual processes to complete, and project planning that has to be…well, planned.

But, my priority for this end-of-year is documentation.

Docs

We have a few different tools for documentation. Examples are Confluence, Google Sheets, Google Docs, etc.

But most of the time, documentation takes the form of scripts saved to source control.

These scripts span multiple processes, depend on multiple teams, and can quickly end up like me doing DIY: one hand trying to steady the nail, the other wielding the hammer, and the whole situation collapsing into swear words and tears.

We can’t currently have a “hit-Enter-and-leave-it” bunch of scripts because we have to stop midway for another team’s work or to check the results of what we’ve just run.

Notebooks

If we used Notebooks, this would be so much easier. I could create the code, save a snippet of the results to the notebooks, and then future executors could see what to expect.

No-books

We don’t use Notebooks.

Plain .sql files for me, it is! 

To ease the documentation burden and have some semblance of tidiness, I created a PowerShell tool that splits it out “all pretty like”.

Format-TextTable

Now, with a combination of Read-Clipboard from the ImportExcel module and Format-TextTable, I can grab results and turn them into a text table that I can add back into the script.

Simple example: we want to save the database name, create date, and compatibility level of the databases on an instance.

… I said simple example – not good or useful example.

SELECT 
	database_name = name,
	create_date,
	compatibility_level
FROM sys.databases;
[Image: an SSMS screen selecting name, create_date, and compatibility_level from sys.databases, returning 5 rows]

Now, let’s copy and paste the results into a comment on that script as a form of “documentation”.

[Image: the same query with the results pasted underneath in a comment, all with different spacing and not aligned]

Now, this time I’m going to copy those results, and run the following code in my PowerShell Core window, before pasting into the comment block.

Read-Clipboard | Format-TextTable | Set-Clipboard
[Image: the same query with the pasted results wrapped in a text-table format, aligned under their header names]

Given the option, I know which one I’d choose.

Code

You can pick up the source-code here.

Feel free to check it out and add/remove from it.
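
If you just want the gist without clicking through: the function measures the widest value in each column, then pads everything to match. A stripped-down, hypothetical sketch of that idea (the real Format-TextTable handles types and edge cases far better):

function Format-TextTableSketch {
    param (
        [Parameter(ValueFromPipeline)]
        $InputObject
    )
    begin { $rows = [System.Collections.Generic.List[object]]::new() }
    process { $rows.Add($InputObject) }
    end {
        $cols = $rows[0].PSObject.Properties.Name

        # the widest of header vs values decides each column's padding
        $width = @{}
        foreach ($col in $cols) {
            $lengths = @($col.Length) + @($rows | ForEach-Object { "$($_.$col)".Length })
            $width[$col] = [int]($lengths | Measure-Object -Maximum).Maximum
        }

        # header row first, then one padded line per data row
        ($cols | ForEach-Object { $_.PadRight($width[$_]) }) -join ' | '
        foreach ($row in $rows) {
            ($cols | ForEach-Object { "$($row.$_)".PadRight($width[$_]) }) -join ' | '
        }
    }
}

Pipe Read-Clipboard into that instead, and you get the same aligned-under-the-header effect.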

Set-Clipboard

I am aware that I could spend minutes of my time to evenly align the spaces. And, yes, I know that SSMS can do find-and-replace using simple Regex.
But if I manually structure the output every time, for all different types of columns & data types, with different spacings…

I’d be better off arguing with the company to start using Notebooks.

Until then, this works for me. Hopefully, it will work for you.

And hey, maybe it will help improve your documentation in the New Year. That has a higher chance of happening than my plan to improve my DIY skills.

T-SQL Tuesday #155 – Write to Read, Not to Run

Words: 861

Time to read: ~ 5 minutes

Welcome to T-SQL Tuesday, the monthly blogging party where we talk about a topic given to us by the host. This month, we have Steve Jones (blog | twitter) asking us about Dynamic SQL.

There are myriad uses for Dynamic SQL; I’ve already read some of the published posts, and I’m impressed with the range. (Yeah, I’m writing this late; I’m aware.)

I’m aiming for something different. I want to talk about the things I believe Dynamic SQL should have. You can disagree with me if you’d like; I’d welcome it (as long as you can justify it) but here are my thoughts on writing Dynamic SQL.

DECLARE

To make a contrived example, I’ve taken Andy Mallon’s (blog | twitter) script to return SQL Agent job statuses and converted it to Dynamic SQL.

DECLARE
	@job_sql AS nvarchar(max),
	@job_name AS nvarchar(128) = N'syspolicy_purge_history', -- = N'No existy',
	@debug_mode_on AS bit = 1; -- 0;

DECLARE
	@nl AS nchar(2),
	@actual_job_name AS nvarchar(128);

/* Newline for formatting */
SET @nl = NCHAR(13) + NCHAR(10);

/* Job existence check */
IF @job_name IS NOT NULL
BEGIN
	SET @actual_job_name = (
		SELECT
			[name]
		FROM msdb.dbo.sysjobs
		WHERE
			[name] = @job_name
	);

	IF @actual_job_name IS NULL
	BEGIN
		DECLARE @err_msg AS nvarchar(max);

		SET @err_msg = FORMATMESSAGE(N'Cannot find any job labelled: %s', @job_name);
		RAISERROR(N'%s', 0, 1, @err_msg) WITH NOWAIT;
		RETURN
	END;
END;

SET @job_sql = N'USE msdb;
SELECT TOP (1)
	is_running = CASE
		WHEN ja.job_id is NOT NULL AND ja.stop_execution_date IS NULL
		THEN 1
		ELSE 0
	END,
	last_run_time = ja.start_execution_date,
	next_run_time = ja.next_scheduled_run_date,
	last_job_step = js.step_name,
	job_outcome = CASE
		WHEN ja.job_id IS NOT NULL AND ja.stop_execution_date IS NULL
		THEN N''Running''
		WHEN jh.run_status = 0
		THEN N''Failed''
		WHEN jh.run_status = 1
		THEN N''Succeeded''
		WHEN jh.run_status = 2
		THEN N''Retry''
		WHEN jh.run_status = 3
		THEN N''Cancelled''
	END
FROM dbo.sysjobs AS j
LEFT JOIN dbo.sysjobactivity AS ja
	ON ja.job_id = j.job_id
	AND ja.run_requested_date IS NOT NULL
	AND ja.start_execution_date IS NOT NULL
LEFT JOIN dbo.sysjobsteps AS js
	ON jh.job_id = js.job_id
	AND js.step_id = ja.last_executed_step_id
LEFT JOIN dbo.sysjobhistory AS jh
	ON jh.job_id AND j.job_id
	AND jh.instance_id = ja.job_history_id;'

/* Add filter: job_name */
IF @actual_job_name IS NOT NULL SET @job_sql = CONCAT(
	@job_sql, @nl,
	N'WHERE
	j.[name] = @ds_job_name'
);

/* Add sorting */
SET @job_sql = CONCAT(
	@job_sql, @nl,
	N'ORDER BY
	ja.start_execution_date DESC;
'
);

IF @debug_mode_on = 1
BEGIN
	RAISERROR(N'%s%s', 0, 1, @job_sql, @nl) WITH NOWAIT;
END;
ELSE
BEGIN
	EXECUTE [master].[sys].sp_executesql
		@stmt = @job_sql,
		@param1 = N'@ds_job_name AS nvarchar(128)',
		@ds_job_name = @actual_job_name;
END;

I was going to apologise for how long and messy it looked but I realised that’s par for the course with Dynamic SQL.

SET

There is this maxim that I’ve heard bandied about regarding code:

Code is read much more often than it is written[…]

https://devblogs.microsoft.com/oldnewthing/20070406-00/?p=27343

There should be an addendum on that quote for DBAs:

DBAs troubleshoot Dynamic SQL more often than they write it

Probably others, but definitely me

Saying that, Dynamic SQL should have these items to help with that effort.

Proper Formatting

It’s extremely easy to write Dynamic SQL so that it comes out in one string: a hodge-podge of plus signs, variable assignments, and red text, such that it sometimes seems like a foreign coding language.

>++++++++[<+++++++++>-]<.>++++[<+++++++>-]<+.+++++++..+++.>>++++++[<+++++++>-]<++.------------.>++++++[<+++++++++>-]<+.<.+++.------.--------.>>>++++[<++++++++>-]<+.

The above code sample is apparently a working “Hello World” program in one of those languages.

Don’t do this; properly format your Dynamic SQL. It will help when the code shows up in your monitoring toolkits. You have them (and know how to use them), right?

To help check your formatting, Dynamic SQL should include…

A Debug Method

Troubleshooting is so much easier when you know what you are going to run. The amount of Dynamic SQL where you have to build that in your head while reading the code is ridiculous!

If you give a procedure filled with Dynamic SQL to a junior DBA, you’re going to see how fast fear and resignation fill someone’s face.

It’s straightforward to create a debug method that spits out the code that is going to be run. A secondary benefit is that it ensures you format your code properly, because you can see how it is going to turn out.

IF @debug_mode_on = 1
BEGIN
	RAISERROR(N'%s%s', 0, 1, @job_sql, @nl) WITH NOWAIT;
END;
ELSE
BEGIN
	EXECUTE [master].[sys].sp_executesql
		@stmt = @job_sql,
		@param1 = N'@ds_job_name AS nvarchar(128)',
		@ds_job_name = @actual_job_name;
END;

A.O.B

There are a few other things that I like to add to Dynamic SQL but I will grant are not necessary. I’ll leave them to you to make up your own minds about.

Sanitise inputs

If the user passes in an object, ensure it’s there

/* Job existence check */
IF @job_name IS NOT NULL
BEGIN
	SET @actual_job_name = (
		SELECT
			[name]
		FROM msdb.dbo.sysjobs
		WHERE
			[name] = @job_name
	);

	IF @actual_job_name IS NULL
	BEGIN
		DECLARE @err_msg AS nvarchar(max);

		SET @err_msg = FORMATMESSAGE(N'Cannot find any job labelled: %s', @job_name);
		RAISERROR(N'%s', 0, 1, @err_msg) WITH NOWAIT;
		RETURN
	END;
END;

Escaping input

Same point really – if you want to raise a warning or error with what is passed in, use something that escapes user input.

I tend to use FORMATMESSAGE for these bits.

EXECUTE

These might appear to be overkill, but I have a, cough, contrived, cough, example: the code I took from Andy’s GitHub and converted to Dynamic SQL!

Quick Test with Debug

If we pass in no job name and leave debug mode on, it spits out the code, sans any WHERE clause before the ORDER BY, bringing us back all jobs:

If we change up the job name to one that does not exist e.g. @job_name = N'No existy'; then we verify that the job doesn’t exist and error out:

Passing in a job that does exist, then adds that job name to the WHERE clause (parameterised, of course):

Let’s Run It!

Turn off debug mode and…

That’s throwing up an error message, but I’d be confident in saying that anyone glancing over the code in the gist would miss the three mistakes.

Yes, three of them – I wrote one on purpose but created two more by accident.

Luckily, we can turn debug mode back on, grab the output, and parse it in a new window:

The AND in the JOIN clause was intentional; the terminating semi-colon before the WHERE clause was not, and neither was the mistake in the alias. The joys of re-writing code.

Fix those mistakes in the Dynamic SQL, turn debug mode back off, and we’re ready to re-run. It probably took around 5 minutes going slow, but now when we re-run the code:

Result Set

That’s one example of why I believe Dynamic SQL should have proper formatting, debugging, and a few other small touches.

I had more examples, but they normally come to me around 02:00 in the morning while I’m swearing and sweating on an incident call.

I’d prefer to fix and forget them, after I make sure they don’t happen again.

Publishing PSTimeSheets To The PSGallery

Words: 567

Time to read: ~ 3 minutes

Skip This for Your Sanity

I have created a module
cause nobody wants to do timesheets no more;
they want PowerShell to do it for ya.
Well, if this is what you need,
then this is what I’ll give ya.
(Ahem, apologies about that, songs get stuck in my head sometimes).

I Confess

I’ve worked with PowerShell for years but have never published a module before. I’ve helped write changes to a few, e.g. dbatools, dbachecks, and a few internal ones.

But actually creating and publishing one needed adding to my experience list.

There was a lot of gnashing of the teeth, wailing of the cries, and reading of the documentation.

There were a few things that I wanted to do before I published.

Creating tests against all the functions; done.
Creating documentation for all the functions; done.

These were the easy sections; publishing the module was where I encountered the speedbumps.

So here’s a quick list of the aspects that threw me for a loop.

.PSD1 vs .PSM1 Files

I’m aware that the auto-loading of PowerShell modules boils down to a combination of the PSModulePath environment variable ($ENV:PSModulePath) and the .psm1 file for the module. But is there a default way to auto-generate that file?

I thought it was the New-ModuleManifest command, but nope, that creates the .psd1 file. At least I don’t have to worry about that.

Apparently, best practice is not to dot-source everything into the .psm1 file; it’s supposed to be more performant to put the functions’ definitions directly in there. That’s not what I did.
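
What I did instead is the common dot-source-everything pattern. A sketch, assuming the usual one-function-per-file layout in a Public folder (illustrative, not the published file):

# PSTimeSheets.psm1
foreach ($file in Get-ChildItem -Path $PSScriptRoot\Public\*.ps1) {
    # dot-source each function file into the module scope
    . $file.FullName
}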

Publishing

First of all, yes, anyone can publish to the PSGallery; you just need an account.

Did I know that you needed an account? Hell no.
Did I find out? Hell yeah.

To be fair, they say as much when you try to publish the module, asking you for a NuGetApiKey. Pop open your profile in PSGallery, and you can generate it from there.
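
Once you have the key, the publish itself is a one-liner (the path and variable name here are illustrative):

Publish-Module -Path .\PSTimeSheets -NuGetApiKey $nuGetApiKey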

Missing Values in the .PSD1 File

Remember a few paragraphs ago when I said I didn’t have to worry about the .psd1 file? Yeah, I was wrong. The command New-ModuleManifest is excellent. But, a few key features get missed from the default options.

The Description field doesn’t have an entry, yet it’s a required key to publish a module. Simple enough to open a text editor and update the values there; simple, if annoying.

This next bit is on me: after you have filled out the Description field and tried to publish the module, you will get the same error message. That’s because the Description field, having started off empty, will also be commented out. Re-open the editor, remove the hash/pound/octothorpe that makes the field a comment, save, and you should be good to go.
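
If I were starting over, I’d sidestep that whole dance by passing the description when generating the manifest, something like this (values illustrative):

New-ModuleManifest -Path .\PSTimeSheets\PSTimeSheets.psd1 `
    -RootModule 'PSTimeSheets.psm1' `
    -Description 'Functions to fill in timesheets for you.' `
    -Author 'Shane'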

NodeJS, I Think?

There were other tangles with the Publish-Module command that pushes to the PSGallery. I’ve chalked them up to a sinister combination:

The Linux knowledge needed for troubleshooting vs the amount of Linux knowledge I had.

I switched out of my WSL and tried to publish from my Windows Desktop. It went as smooth as… a very smooth thing.

Return 0

Overall, it was a simple process made more difficult by a lack of experience. Easy enough for anyone to pick up; annoying, but manageable. Would I do it again?

Well, I’ve got improvements to make to PSTimeSheets, so… yeah!

T-SQL Tuesday #153 – The Conference That Changed Everything

Words: 585

Time to read: ~ 3 minutes

Welcome to T-SQL Tuesday, the monthly blogging party created by Adam Machanic, maintained by Steve Jones (twitter | blog), and hosted this month by Kevin Kline (twitter | blog).


This month, the topic is:


Tell us the story of how attending an IT conference or event resulted in an amazing career or life opportunity.


I have gone to my fair share of conferences, especially since I still think I’ve only been in the community for a relatively short time.
Less than a decade so far.
This duration is a guess, though, because I don’t want to take a closer look in case that number is too low and I start having scary thoughts.

The idea of picking out a single IT conference is also a scary thought.
I’ve had more standout events than any professional should be lucky enough to have.
In any event that I have gone to, the talks have been more than knowledgeable; they have been inspiring.
To leave a conference without a renewed energy and motivation to dive into the plugged tech is rare, and it only happened to me once when I went after flying & driving for more hours than I had slept in the previous days.

So it’s not the tech side of events that I want to extoll; it’s the “softer” side, the people & the memories.
I will be cheeky and say that my answer for this T-SQL Tuesday will be all of them, but for different reasons.


SQL Pass for meeting up with the dbatools team and meeting with Jess Pomfret (twitter | blog), Chrissy LeMaire (twitter | blog), Constantine Kokkinos (twitter | blog), Andy Levy (twitter | blog), and John G Hohengarten 👞 (twitter).


SQL Bits, as an attendee, for meeting up with Mötz Jensen (twitter), Dan Alexander (twitter) and Rob Sewell (twitter | blog).
SQL Bits, as a volunteer, for joking and laughing with Neil Holmes (twitter) outside the event to keep warm and realising that Erik Darling (twitter | blog) despairs my taste in whiskey.


Data Grillen, for meeting a great man in Reitse Eskens (twitter | blog), being dazzled by Ben Weissman (twitter | blog) & William Durkin (twitter), and finding out that Hamish Watson (twitter | blog) isn’t as cool as I thought; he’s cooler.


SQL Saturday Boston, for realising that Andy Mallon (twitter | blog) is a fantastic person to be able to organise, invite, welcome, and help host an all-day event with aplomb.


SQL Saturday Cork, for having non-tech chats on a cricket pitch with Kevin Kline (see starting paragraph) and Sander Stad (twitter | blog) on a rare sunny Irish day, talking about random topics we realised all interested us.


Data Scotland, for meeting up with people I had come to know online but not had a chance to meet face to face; Brett Miller (twitter | blog), Craig Porteous (twitter | blog), Paul Broadwith (twitter | blog), Johan Ludvig Brattås (twitter), Cathrine Wilhelmsen (twitter | blog), and John Armando McCormack (twitter | blog).


And for any PowerShell conference, for enlightening me to the genius of Mathias Jessen (twitter | blog), Adil Leghari (twitter), Chris Gardner (twitter | blog), and Adam Russell (twitter | blog).

I will have to stop before the memories stretch out, and I get caught up in them all evening. There are too many people to name in a short blog post, and the amount of memories makes me re-think the amount of time I’ve been in IT, leading to Scary Thoughts again.

The Conference that changed everything? All of them, and hopefully each one that will have me next.

Querying SQL Server with Golang

Words: 333

Time to read: ~ 2 minutes

I’ve been looking into Golang recently. Hey, everyone has to go with what interests them!

It’s all very well and good to open up Golang and write a FizzBuzz (note to self: write a FizzBuzz), but I still work with databases.

So before I do anything with Golang, I’d like to know: can it interact with databases?

Granted, everything can, but how easy is it? And does trying to interact with databases kill any interest I have in the language?

So, let’s give it a go.

First, some caveats.

  1. You have to install Go (shocking, I’m aware).
  2. This post is effectively me reading the tutorials on the Golang website and converting it to work with SQL Server.
  3. You have to do some funky stuff to get the driver for SQL Server; granted, the funky stuff basically means running go get <Github link> (the exact command is below this list). I’ve done funkier…
  4. There’s also some better way to do this, but who cares. This is exploratory.
  5. When setting up my SQL instances, I did some weird thing where I now have to specify a non-standard port when connecting. So I have to open up the error log in SQL Server, see what port it’s listening to, and put that into the code. Yes, I’m aware that I’m a DBA, and I should be able to fix this, but it’s my personal time & my personal test instance. I’ll fix it when I want to.
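
For what it’s worth, the funky stuff in point 3 boiled down to one command for me, matching the driver imported in the code below:

go get github.com/denisenkom/go-mssqldb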

Anyway, here’s the code. 

Do with it what you will; I’m just happy it works, and I’m delighted that I’m excited about it!

package main

import (
	"database/sql"
	"fmt"
	"log"

	// this is the driver & yes, even though we're throwing
	// it away to `_`, it's still needed
	_ "github.com/denisenkom/go-mssqldb"
)

var (
	db     *sql.DB
	server = "localhost"
	// Yes, I hate this too…
	port = 60515
)

func main() {
	// Build connection string
	connString := fmt.Sprintf(
		"server=%s;port=%d;Trusted_Connection=True;AppName='Golang'",
		server,
		port,
	)

	// I want the line that the error occurs in my logs, please & thank you
	log.SetFlags(log.Lshortfile)

	// Create connection pool
	var err error
	db, err = sql.Open("sqlserver", connString)
	if err != nil {
		log.Fatalln("Creating connection pool failed:", err.Error())
	}

	stringDate, err := getDate()
	if err != nil {
		log.Fatalln("getDate() failed:", err.Error())
	}

	fmt.Println("Read:", stringDate)
	fmt.Println("Read rows successfully!")
	fmt.Println("Finished.")
}

func getDate() (string, error) {
	var dte string

	// We're "scanning" the values from the query into the `dte`
	// variable using `&<var name>`. Different, but interesting…
	if err := db.QueryRow("SELECT dt = SYSDATETIME();").Scan(&dte); err != nil {
		return "false", fmt.Errorf("getDate() function failed! %s", err.Error())
	}

	return dte, nil
}

First of all, what happens when it works?

go run main.go

And what happens when it fails, say cause the default port actively refuses connections?!

// changing `port = 60515` -> `port = 60514`

Happy out! Now to see what else we can do. Thankfully, I’ve somehow managed to get some ideas marinating in creative juices. Let’s see if I have the time and the energy to see them through!

T-SQL Tuesday #152

Words: 375

Time to read: ~ 3 minutes

I don’t know if this post is a rage against “database-ubiquitous” SQL or Object-Relational Mapping tools (ORMs)

I’m going to say that it’s a rant against the first because if you know what you are doing, then ORMs won’t be a performance concern

But people use ORMs instead of Stored Procedures because
Them: “what happens if we need to change databases? We don’t have to worry about that now cause they’ll be baked in the application, and we won’t have to recreate the Stored Procedures in the new database cause ORMs just work!”

Everything just works until it doesn’t

I’ve worked with several companies and heard this argument many times from developers

Part of why I’m ranting about database-ubiquitousness SQL instead of ORMs is that I’ve also witnessed SQL code that has to work everywhere

I’m going to call this SQL “DUh SQL” from now on since I only have so many vowels on my keyboard and shouldn’t waste them re-writing the word “ubiquitousness”

Them: “Nope, we can’t use DATEFROMPARTS; it’s not available in other databases! Yeah, we use GETDATE() everywhere; what’s your point? WHAT!?!”

I’ve moved across different cloud providers more than I’ve moved databases, and guess what? Each cloud provider move, we’ve kept the same database

I know that knowledge sharing “in the wild” says that you should use ORMs or DUh SQL
The route to a 10x engineer is paved in DRY, KISS, and SWALK
Well, maybe not SWALK

Them: “But the leading minds out there say that you should keep it DUh since that means it’ll be a breeze if we have to move databases”

Sure, and I’ve been told that ice baths help in recovery
I’m not going to take them, though, since I know there’s a difference between me trying to do a 5k and dedicated athletes!

I love arguing with people in my blog posts; I rarely lose

So, don’t use DUh SQL cause of reasons that may not apply to you
Don’t refuse performance tuning efforts if they’ll add database-specific code
And, please, learn to use your tools

Oh, and apologies for the lack of full-stops, I can’t use them in case I have to turn this post into regex