Time to read: ~ 4 minutes
I pity people who have to write acceptance speeches…who do you leave out?
Ah T-SQL Tuesday, is it that time again? And the 94th one at that! Wow!
T-SQL Tuesday, the brain-child of Adam Machanic ( blog | twitter ), has come around once more and this time it is being hosted by Kendra Little ( blog | twitter ). The topic? Interviewing Patterns & Anti-Patterns.
… I have not had that many interviews. A combination of not being that long in the working community since college and staying at the same company for quite a while means that it’s just not something at which I’ve had a lot of practice.
I suppose that I should do one or two, if not to look for a new place to work, then to practice them, see how I measure up, and test my skill.
Mainly though to answer some questions that I have.
Questions about interviewing for DBAs.
I hope you don’t find them too boring or basic.
This question is one that I couldn’t really wrap my head around. How do you interview a DBA? If the purpose of an interview is to evaluate a candidate for a position then how do you measure them?
On the technical side, what do you do? Most of the interviews that I’ve been to have involved some aspect of testing, but the thing is, there are different types of DBAs, depending on what they focus on.
Do you judge a DBA, who is focused on Virtualisation, on the intricacies of SQL Server internals?
Or a DBA, focused on Azure, on their knowledge of SQL Server 2005 and when certain T-SQL functions came in?
A company who is looking for a database design expert is going to focus on that and may not care about a DBA’s expertise in HA/DR options.
How do you ensure that you are adequately testing the competency of a DBA?
Interviews go two ways though, and companies can be more wrong than right (it happens).
The question here is when you run into a company with the wrong beliefs, what do you do?
If the company interviews you and says that you’re wrong in saying that
TRUNCATE TABLE can be rolled back, what do you do?
If they say that index rebuilds don’t update index statistics on the columns in the index, again what do you do?
What do you do if they won’t listen, if they won’t look at any examples, if they won’t see reason when given proof to the contrary?
How do you deal with a company that is incorrect in their basic assumptions and unwilling to learn?
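Take the TRUNCATE TABLE claim as an example. Here is a minimal sketch, using a throwaway table in a scratch database, showing that it can indeed be rolled back:

CREATE TABLE dbo.TruncateDemo (Id INT);    -- throwaway table, purely for the demo
INSERT INTO dbo.TruncateDemo (Id) VALUES (1), (2), (3);

BEGIN TRANSACTION;
    TRUNCATE TABLE dbo.TruncateDemo;
    SELECT COUNT(*) AS RowsInsideTran FROM dbo.TruncateDemo;   -- 0, the table is empty
ROLLBACK TRANSACTION;

SELECT COUNT(*) AS RowsAfterRollback FROM dbo.TruncateDemo;    -- 3, the truncate was rolled back

DROP TABLE dbo.TruncateDemo;

If running that in front of them doesn’t convince them, the problem probably isn’t a technical one.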
…and that we, as DBAs, have to learn to change with it otherwise we’ll get left behind.
A concern for me is that maybe the way that we interview DBAs isn’t right, and that it needs to change or it, too, will get left behind.
Unfortunately, like most things, I don’t have the answer yet…
I’m learning though…
That’s T-SQL Tuesday #92, not 92 lessons learned the hard way
I make no effort to hide the fact that I am not the biggest fan of GUIs, and I’ve been fortunate enough to turn that dislike into an admiration of command line tools. I said “an admiration”, not that I’m any good at them yet! I have been fortunate enough to provide a function for dbatools.io (have you helped them out yet?) but it just goes to show that anyone can help out, regardless of skill level.
In case you ever wondered where this dislike came from, let me tell you a hypothetical story about…my friend that I used to work with.
Now my friend wasn’t a DBA then, he wasn’t even an Accidental DBA, he was more a “that guy is good with databases, ask him” kind of guy. In short, my friend knew just enough to be dangerous without knowing that he could be.
…which was either today or 5 years ago, depending on what version of SQL Server you’re running (but we’ll say 5 years ago), my friend was working as a SQL Support Engineer for a software provider.
The provider didn’t handle backups; that was all taken care of by 3rd parties. In case something went wrong, these 3rd parties provided the backups and either the software provider or the in-house I.T. would restore them. (FYI, I’m very cautious of 3rd party backup tools as well).
…and eventually a bug was discovered in the release that could have potentially had some data impact (no particular reason to say Friday, I just don’t think you should release on one).
So a plan was made to request a 2 week old backup, restore it, and compare that data against the current production database.
My friend goes to the Object Explorer, opens the “Databases” node, and sees that there are two databases there: Live ([TheEarlyBird]) and a disused copy of Live ([TheEarlyBird2]) that is a day old and can be overwritten.
Not knowing any better, my friend right-clicks the old copy, clicks “Tasks”, then “Restore”, then “Database…”, and a lovely GUI pops up.
Now my friend doesn’t know any better, he thinks that the GUI is here to help him and in most of the cases it is. What my friend failed to realize is that there is a difference between helping him and doing the work for him…
The 3rd party backup file has not yet been retrieved but that stops my friend not! This is an urgent case so my friend forges ahead, thinking that he can get everything set up and ready, then all he would have to do is select the file when it was made available.
So my friend goes back to the General Page, clicks the “Device” radio button, and selects the backup file…
…and clicks “OK” to start the restore!
My friend encounters errors:
Exclusive access could not be obtained because the database is in use.
This confuses my friend as this is a disused copy of the database; the only person who should be on it is himself.
Does my friend go and maybe check out
EXEC sp_who2; to see who else could be on this database? No, remember that my friend knows just enough to be dangerous. My friend goes back to “Tasks”, “Restore”, “Database…”, goes to the Options Page and checks the box labelled “Close existing connections to destination database”….
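(For the record, a check along the lines of the one below would have shown exactly who was connected. This is only a sketch; the database name is the one from the story, and the database_id column on sys.dm_exec_sessions needs SQL Server 2012 or later.)

-- Who is actually connected to the "disused" copy?
SELECT s.session_id,
       s.login_name,
       s.host_name,
       s.program_name
FROM sys.dm_exec_sessions AS s
WHERE s.database_id = DB_ID(N'TheEarlyBird2');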
With that, my friend clicks the “OK” to restore the database and continues on his merry way…the dumb fool that he is.
…have this little “optimization” technique where it looks at the name on the database backup file and matches it up with the database name.
Now what this actually meant was the moment that my friend clicked the “Device” button, all his work was gone and his destination database reverted to the Live Database!
The first time my friend clicked “OK” to restore wasn’t a problem, since the existing connections blocked the restore and the Live database wasn’t affected.
But then my friend goes back and clicks “Close existing connections to destination database”…just enough knowledge to be dangerous…
So in summary, what my friend had done was kick every single connection off of Live and then effectively wiped 2 weeks worth of data.
Thank goodness for tail-log backups!
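In case you have never had to take one, a tail-log backup looks roughly like the below. The database name is from the story and the path is made up; taking it WITH NORECOVERY leaves the database in a restoring state, ready for the point-in-time restore that follows.

BACKUP LOG [TheEarlyBird]
    TO DISK = N'C:\Backups\TheEarlyBird_tail.trn'
    WITH NORECOVERY;    -- grab everything since the last log backup, then leave the DB restoring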
The restore dialogs give you the option to script out the configurations you have chosen. If my friend had chosen to script out the restore, rather than clicking “OK” to run it, maybe he would have caught this mistake when reviewing it – rather than overwriting the Live database with 2 week old data and spending a weekend in the office with 3 colleagues fixing it.
Plus if you ever want to ensure that you know something, try and script it out from scratch.
There is this saying that…
…there is no such thing as failure
I guess it’s a personal experience but I say that it is thanks to “my friend” that I was able to do 2 side-by-side
WITH STOPAT database restores today.
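In case you haven’t had the pleasure, a point-in-time restore looks roughly like the below. The file paths, database name, and stop time are made up for illustration only.

RESTORE DATABASE [TheEarlyBird2]
    FROM DISK = N'C:\Backups\TheEarlyBird_full.bak'
    WITH NORECOVERY, REPLACE;    -- leave it restoring so the log can be applied

RESTORE LOG [TheEarlyBird2]
    FROM DISK = N'C:\Backups\TheEarlyBird_log.trn'
    WITH STOPAT = N'2017-06-13T09:00:00',    -- stop just before the bad thing happened
         RECOVERY;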
Oh and FYI SQL Server 2012 Enterprise Core Mainstream Support ends today.
I’m very upset about that… 😐
It’s T-SQL Tuesday time! T-SQL Tuesday time! T-SQL Tuesday, T-SQL Tuesday, T-SQL Tuesday and a baseball bat!
Ahem…sorry about that…
Technology has changed a lot in the past years, especially with cloud/globalization/automation. What an impact has this had on your job? Do you feel endangered? Or do you have more exciting features/toys to work with? Do you embrace the change and learn new skills? Or do you hide in your cubicle and fear the robot uprising?
Now my knowledge of these subjects is somewhat limited; I have yet to research the cloud beyond a rudimentary “poking it with a stick” so I leave that topic to people like Arun Sirpal ( blog | twitter ) and others.
Globalization is another new topic for me. I consider it to be along the lines of being able to fill in positions with people all around the globe (but I’m probably wrong with this). There is a veritable plethora of people online that are excelling at this, so I leave it up to you to seek them out.
Automation…well I have some dealings with this, so I was considering it to be my topic. Then I realised that I could write about something along the same lines. Out-sourcing, but not in the conventional sense. More like…
There’s never enough hours in the day for everything I need to do!
How many times have we heard a complaint similar to that? Especially nowadays, when DBAs are tasked to look after more and more servers and instances. I cannot remember the last time I heard of a DBA taking care of servers in the single digits.
The work of the DBA keeps increasing but the amount of time that we have remains the same. How do we combat this? How do we make it so we are not sprinting just to keep up?
The only answer I have to this problem is this.
Don’t try to re-invent the wheel…let someone else do it.
The SQL community has been hard at work creating awesome content for everyone to use.
A short list off the top of my head includes:
This doesn’t include the tools available from companies like
And have you seen some of the scripts created by individual members of the SQL community?
Hmm, maybe this won’t be as short a list as I thought…
You know what this blog post needs? More links!
Can I stop now? NO!!! MOAR LINKS!!!
And with Microsoft’s advancements with SQL Server and PowerShell, porting it to work with Linux and with Azure, it is like Microsoft are actively trying to get you to automate; actively trying to make your life easier!
So yes, technology has changed but we, as a SQL community, have worked so that you can use these tools, automate away the drudgery, and embrace these changes.
As long as you embrace this “inline out-sourcing” that we have.
Now I could wax lyrical about all the different aspects of the community and how, with these resources, you shouldn’t need to re-invent the wheel but I’m not going to.
These people have poured hours of effort into these scripts, all for you to use. They have poured their hearts, souls, and RAM into these scripts just so that they can help out the community.
I would ask a favour of people reading this post though. For all the time, effort, and sacrifice that these people have put in…
It means more than you know.
That is more of a blurb than a title…and this is more an apology than a blog post…
Now, truth be told, I wasn’t planning on participating in this one and this wasn’t because of not having a WTF moment, but rather having too many of them. However, reading through most of the entries, I see a vast majority of them are about moments well in the past and caused by other parties.
This is not the case for me. My WTF moment happened recently and the culprit was … myself.
Sorry Kennie 😦
A request came in from our Developers about a slow performing query, and my Senior DBA identified an index that could be safely modified to improve this one’s performance.
So a Maintenance Window was set and it fell to me, in my role as Junior DBA, to create a SQL Agent Job to create this index.
No worries so far right?
I create a once-off SQL Agent Job to create this index, schedule it appropriately, and I’m off on my merry way for the weekend.
I come in on Monday morning, check my email, and I see an alert in my inbox about my job as well as an email from my Senior DBA; he’s not angry…WTF?
My whole job had failed!
Unable to connect to SQL Server ‘(local)’. The step failed.
He is not angry as he has seen this error message before, has dealt with it before, and sees it as a case of “well you’ve seen it now, investigate it and you won’t fall for it again”.
A quick investigation later pointed to this in the Error Log the moment before the SQL Agent Job Step was supposed to run:
 ODBC Error: 0, Connecting to a mirrored SQL Server instance using the MultiSubnetFailover connection option is not supported. [SQLSTATE IMH01]
Long sub-story short (i.e. Google-fu was involved), the main reason this failed is that the SQL Agent Job Step had been configured to use a database that is currently mirrored.
And SQL Agent does not like it when you try to start off a step in a mirrored database.
So the solution for me was to set the Job Step property ‘Database’ to a non-mirrored database (preferred: [master]), then include a “USE [<mirrored database>]” in the ‘Command’ property.
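If you script that step out rather than clicking through the GUI, it ends up looking something like the below. The job, table, index, and database names here are all hypothetical; the important bits are @database_name pointing at [master] and the USE inside @command.

USE [msdb];
GO
EXEC dbo.sp_add_jobstep
    @job_name      = N'Create index - maintenance window',    -- hypothetical job name
    @step_name     = N'Create the index',
    @subsystem     = N'TSQL',
    @database_name = N'master',                                -- a non-mirrored database
    @command       = N'USE [MirroredDatabase];                 -- the mirrored database (hypothetical name)
                       CREATE NONCLUSTERED INDEX IX_Demo ON dbo.SomeTable (SomeColumn);';
GO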
Knowing what to do now, and having identified another maintenance window for the next morning, I make the required changes to the job step and continue on with my day.
I come in on Tuesday morning, check my email, and I see an alert in my inbox about my job as well as an email from my Senior DBA; he’s angry…WTF?
My final job step had failed!
CREATE INDEX failed because the following SET options have incorrect settings: ‘QUOTED_IDENTIFIER’. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations. [SQLSTATE 42000] (Error 1934). The step failed
Now I’m angry too since I count these failures as personal and I don’t like failing, so I get cracking on the investigation.
Straight away, that error message doesn’t help my mood.
I’m not indexing a view!
I’m not including computed columns!
It’s not a filtered index!
The columns are not xml data types, or spatial operations!
And nowhere, nowhere am I using double quotes to justify needing to set QUOTED_IDENTIFIER on!
SO WTF SQL SERVER, WHY ARE YOU GIVING ME THESE ERRORS???
SET QUOTED_IDENTIFIER must be ON when you are creating or changing indexes on computed columns or indexed views. If SET QUOTED_IDENTIFIER is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail.
I’ve talked about stupid error messages before… but in my current mood I wail, beat my breast, and stamp my feet!
The error message above was not complaining about the index I was creating, it was complaining about indexes already on the table!
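A quick way to check for the usual culprit, filtered indexes already sitting on the table, is something like the below (the table name is hypothetical):

SELECT i.name,
       i.has_filter,
       i.filter_definition
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID(N'dbo.SomeTable')    -- hypothetical table name
  AND i.has_filter = 1;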
In my case, we had filtered indexes already created on the table and, as such, every single index on this table from then on requires
SET QUOTED_IDENTIFIER ON.
USE [TEST];
SET QUOTED_IDENTIFIER ON;
CREATE ...
No, not this time.
Luckily the Senior DBA had come in while the maintenance window was still running and manually ran the create index script.
He wasn’t angry that my job step failed. He was angry that my first job step succeeded!
Are you going “WTF? Why is he angry about that?” Let me enlighten you…
Remember at the start of this blog post I said that he had identified an index that could be safely modified?
Well, on Monday, in my haste to fix my broken job I had focused too much and thought too granular.
My second job step, the one that created the index, had failed, but my first job step, the one that dropped the original index, had succeeded.
There’s not really much more to say on this. In my rush to fix a broken job, I created a stupid scenario that was luckily caught by the Senior DBA.
Yeah…so I thought it would be a nice, little counter-example to the other posts out there about third parties coming along and wreaking havoc, and the DBAs swooping in to save the day.
I could make up excuses and say that, as a Junior DBA, I’m expected to make mistakes but I’m not going to.
It should be the aspiration of every Junior DBA to strive to improve and move upwards, and one of the key aspects of this is responsibility.
You should be responsible for looking after the data, looking after the jobs, and looking after the business.
And if all else fails, you should be responsible for your actions.
I have been properly chastised by my Senior and am still chastising myself for this. It’s been a long week so far…
… and it’s only Tuesday…wtf?
Read this and don’t bother me for permissions again…I’m talking to the user, not you.
Matt Gordon ( b | t ) has the unfortunate luck of hosting this month’s T-SQL Tuesday. I say “unfortunate” because I don’t think many of the other halves out there are going to be happy with the last minute writing and checking that bloggers are going to do for this.
He makes up for it though by asking us to think of the positive aspects of SQL Server. What old problems can we fix with the shiny new toys available to us since SQL Server 2014+?
Have you ever had to give someone uniform access to all the tables in all the databases in your instance of SQL Server, prior to SQL Server 2014?
Now, seeing as this is a blog post designed to highlight the benefits of the new way and shine a light on the ugliness of the old way, I’m going to show you how I would do this with the GUI.
Here we have a lovely little 2014 Developer edition of SQL Server, with a couple of random databases.
We need to create a LOGIN for this instance so that we can grant the needed permission to it. So opening the “Security” node on the instance and right-clicking the “Logins” node gives us a nice, little option called “New Login…”. Click that!
Great! Step 1 of the multi-step process complete! Now for the first database, which in our case is “DMODisabled”.
A LOGIN needs a USER, right? Well let’s just open up the nodes “Databases | DMODisabled | Security” and right-click “Users”. Now let’s create a “New User…”
We have a choice now.
We could connect to every table, one-by-one, and grant SELECT permissions to our User. Or we could be a bit more clever and grant SELECT permissions on the different schemas themselves.
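(For reference, the schema route is a one-liner per schema. I’m using the same login/user name that shows up later in this post, and it assumes the user already exists in the database.)

GRANT SELECT ON SCHEMA::dbo TO [GiveMeALLTheData];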
It’s up to you, to be honest. We have to ask ourselves the question though:
What happens if a new database, table, or schema is created?
Answer: We have to do this again for them.
Also remember all this was for one single database! By my count, I still have four more databases to do after this.
I’m going to do none of those things because it’s already tedious, I’m not the fondest of the GUI, and it is making the urge to add coffee to my coffee increase.
I’m not sure how you would script this out, but for me it involved cursors. Cursors that spanned all the databases and all the tables/schemas in each database. Not the best of solutions either, because if any new databases, tables, or schemas got created after it was run, and the need to connect to those was there as well, then you had to run the damn thing again!
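To give you a flavour, a rough sketch of that kind of cursor is below. It assumes you only want database-wide SELECT in every user database (rather than schema by schema), it assumes the user doesn’t already exist anywhere, and it has no error handling whatsoever.

DECLARE @dbname SYSNAME,
        @sql    NVARCHAR(MAX);

DECLARE db_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT [name]
    FROM sys.databases
    WHERE database_id > 4;    -- skip the system databases

OPEN db_cursor;
FETCH NEXT FROM db_cursor INTO @dbname;

WHILE @@FETCH_STATUS = 0
BEGIN
    SET @sql = N'USE ' + QUOTENAME(@dbname) + N';
                 CREATE USER [GiveMeALLTheData] FOR LOGIN [GiveMeALLTheData];
                 GRANT SELECT TO [GiveMeALLTheData];';
    EXEC sys.sp_executesql @sql;

    FETCH NEXT FROM db_cursor INTO @dbname;
END;

CLOSE db_cursor;
DEALLOCATE db_cursor;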
Much like Bonnie Tyler, you need a hero! Well I present to you two heroes!
Now I’ve dropped my User and Login so I can show you the magic of these guys.
Let’s create the Login again, T-SQL-style:
CREATE LOGIN [GiveMeALLTheData]
    WITH PASSWORD = 'UpUpDownDownLeftRightLeftRightBAStart',
         CHECK_POLICY = OFF;
GO
Now what server permission does it have?
SELECT sp.name,
       perm.permission_name,
       perm.state_desc
FROM sys.server_principals AS sp
INNER JOIN sys.server_permissions AS perm
    ON perm.grantee_principal_id = sp.principal_id
WHERE sp.name = 'GiveMeALLTheData';
GO
Where’s our table at?
USE [DMODisabled];
GO
SELECT *
FROM dbo.HideAndSeek;
GO
Can’t access the databases huh? We’ll see about that. Enter our first hero!
USE [master];
GO
GRANT CONNECT ANY DATABASE TO [GiveMeALLTheData];
GO
Let’s re-run our SELECT statement again and…
The table tries to stop us? Too bad. Our second hero steps up to the plate.
USE [master];
GO
GRANT SELECT ALL USER SECURABLES TO [GiveMeALLTheData];
GO
You can now access any database on your instance. Go ahead, give it a try and see, but you may ask yourself:
Wait, what about new databases, new schemas, or new tables, like you said?
Fear not! Any new databases have their permissions automatically added, new schemas have their permissions automatically added, and as for new tables: you better believe that they have their permissions automatically added.
Much better than a script that is needed to be run repeatedly, right?
Keep the shiny new toys coming I say! 🙂