Is this thing on?

Been a while since I wrote anything. Let’s see where we are at in life.
Last year I made it to only Nashville/Rochester/Columbus SQL Saturdays.

This year I helped run Cleveland. I’m doing Raleigh on April 14th. I also applied for Columbus, Memphis, and Oslo. I’m really hoping to go to Norway; I think that’d be awesome.

Also of note: I stepped down as the VP of the Ohio North SQL Server User Group. I helped run the group in some capacity for almost 3 years (really about 2.7), so it was time for new blood. I’m glad we conned Eric B into doing it.

I will still be around, I have no plans on vanishing. I love SQL Family life too much.

Did someone run a trace?

Today I had to check whether someone ran Profiler and a SQL trace in production without approval. A total no-no.

If you query the default trace for that timeframe, you’ll see under ApplicationName that SQL Server Profiler shows up with some numbers after it (I think it was a GUID). But basically, there is a way to tell if someone is up in your database tracing your stuff.

Luckily, in my case no databases were traced, so we didn’t have to do more legwork. But this is good to know if you ever need to check: if you can find the default trace files from that timeframe or thereabouts, you’ve got a shot.
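Here’s a sketch of the kind of default trace query I mean. The timeframe filter is a placeholder you’d adjust to when you think the tracing happened:

```sql
-- Read the default trace and look for Profiler connections.
-- ApplicationName shows up as 'SQL Server Profiler - <some identifier>'.
DECLARE @TracePath nvarchar(260);

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1;

SELECT t.StartTime,
       t.LoginName,
       t.HostName,
       t.ApplicationName,
       t.DatabaseName
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) AS t
WHERE t.ApplicationName LIKE 'SQL Server Profiler%'
  AND t.StartTime BETWEEN '2017-03-01' AND '2017-03-02'   -- timeframe placeholder
ORDER BY t.StartTime;
```

Keep in mind the default trace rolls over, so if too much time has passed the files you need may already be gone.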

Connect

connect.microsoft.com

This is a website where you can file a public bug report for many MS products. Recently I made one about SQL. Here’s the scenario:

Install SQL 14 SP1 or SP2 media from MSDN as an FCI with ReFS disks.

The installer basically does not let you get past picking clustered disks, and it screams at you that a bunch of mount points are invalid. This was annoying. We got around it by using one NTFS base drive (E:), then moving databases, hanging mount points, and adding ReFS drive letters after the fact. I opened a Connect item here:

https://connect.microsoft.com/SQLServer/feedback/details/3135241/sql-14-fci-installer-issues-with-refs-volumes

 

Microsoft responded and said it’d be fixed in an upcoming SP for SQL 14 (SP3, I presume). Pretty cool stuff. Connect can be an interesting place to browse; you can upvote other people’s bugs so they gain visibility, or see if other people are running into issues that you have found.

Had to automate a restore

I needed to automate a restore of a database from prod down to test. This seemed simple enough, except the database backs up to 12 files, I can’t use a linked server, and the time is included in the filename of the backup. All these parts made it more fun than I expected. Fun. Right.

 

So here’s what I used to do the job. I figured that if you can’t use your own tools or dbatools, then this might help you out. I built a table in our admin database with one field to hold the name of the file on disk. To get yesterday’s backup today, I run the following code:

 

-- 1: truncate the staging table

TRUNCATE TABLE admin..tablenameforyourdbnameondiskgoeshere

 

-- 2: run some POSH in a SQL Agent job

 

# Agent PowerShell steps start in the SQLSERVER: provider, so move to a file system path first
Set-Location C:\

# Grab the name of the newest backup file (one of the 12 parts)
$wtf = Get-ChildItem \\pathtoyourbackupsgoeshere\* -Name -Include dbname_* | Select-Object -Last 1
$wtf    # echo the file name into the job history for troubleshooting

# Stash that file name in the staging table
$query = "INSERT INTO tablenameforyourdbnameondiskgoeshere VALUES ('" + $wtf + "')"

Invoke-Sqlcmd -ServerInstance "instancenamegoeshere" -Database admin -Query $query

EXIT

 

The gci (Get-ChildItem) line finds the last database backup file, one of the 12 parts, and that way I have the full name with date and time to insert into a table to use later. There might be a better POSH way to do this, but I suck at POSH. I run this in a SQL Agent job as a PowerShell step. You have to set the location to C:\ first to get it to find your stuff, because that’s what Stack Overflow told me to do.

 

-- 3: I needed the LEFT command because my restore has 12 lines, one per file, and I left those hardcoded

 

declare @file varchar(50)

select @file=db from admin..[restoreworksforreporting]

set @file = LEFT(@file,20)

select @file

 

The restore statement goes here. I had to turn the name of the backup file into a parameter, so I had to use dynamic SQL. Here’s a brief example of that (trimmed to one of the 12 DISK clauses):

DECLARE @sql VARCHAR(MAX)

SET @sql = 'RESTORE DATABASE dbname FROM DISK = ''\\pathofrestore\' + @file + 'File1.bak'''

EXEC (@sql)

I had 12 files to restore from; the DB is 1.7 TB on disk, 7 TB expanded, and 23-odd files. Hopefully this helps someone else out.
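If you’d rather not hardcode the 12 DISK lines, here’s a sketch of building the list in a loop. All the names and paths are placeholders; in the real job, @file comes from the staging table as shown above:

```sql
-- Build the 12-part DISK list dynamically instead of hardcoding 12 lines
DECLARE @file varchar(50) = 'dbname_backup_2017_03_01_'   -- placeholder; really comes from the staging table
DECLARE @sql  varchar(max) = 'RESTORE DATABASE dbname FROM '
DECLARE @i    int = 1

WHILE @i <= 12
BEGIN
    SET @sql = @sql + 'DISK = ''\\pathofrestore\' + @file + 'File'
                    + CAST(@i AS varchar(2)) + '.bak'''
                    + CASE WHEN @i < 12 THEN ', ' ELSE '' END
    SET @i = @i + 1
END

PRINT @sql   -- sanity-check the statement before running it
-- EXEC (@sql)
```

The PRINT is there so you can eyeball the generated statement before you trust it with a 1.7 TB restore.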

Server C: running low on space?

Try this quick tip to regain some space. We use SCCM to deploy Windows updates and they can fill your C: over the life of the box. We use these two methods to reclaim space:

Method one: empty all the recycle bins for all users.

For Windows 7 or Server 2008 enter this command:

rd /s c:\$Recycle.Bin

For Windows XP, Vista, or Server 2003 enter this command:

rd /s c:\recycler

 

Method two: open Control Panel > Configuration Manager > Cache tab > Configure Settings > lower the slider so there’s less room for updates to stick around > hit Delete Files > check ‘Delete persisted cache content’ > hit Yes.

This week I gained back about 16 gig of space on a C: that had 0 MB remaining.

A GUI for Distributed Replay?

What do we want? GUIs! Mostly because I’m a lazy fellow who likes clicking the mouse instead of learning shell commands and automating all the things. That habit needs to change, but that’s a kick in the rear for another day. What I can tell you is that with SQL 2016, Microsoft really wants you to upgrade. They want to get you on the latest and greatest, and they’re making that even easier with the release of a new tool titled “Database Experimentation Assistant.” Experiments? That sounds exciting to me. I hope it does to you too.

So what does this shiny new application do? It lets you GUI-ify the process of running Distributed Replay. Yes, you will need to have installed and set that up first. Not only that, but it will do some analysis on your workloads and tell you if your newer SQL Server will be slower than your current production box. Let’s be honest: coming into work that Monday morning after a weekend upgrade and having your phone ring because the application isn’t performing is not what anyone wants. Starting in SQL 2012 you could use Distributed Replay to capture your current workload and play it back on your new instance. If you did lots of work, you could get pre and post stats to see if your queries would fly or run slower. This tool makes all those calls for you and makes your life easier. The tool is new as of PASS 2016, so I recommend you give it a try at work. We did, and found a bunch of MERGE statements that were going to run slower. Those are real-world findings we took to our development team to fix before we went live. As an aside, I told them to dump MERGE, but they didn’t want to listen ( =) ). That most likely stopped someone’s phone calls that Monday morning after the upgrade. That’s results you can take to the bank.

This year my SQL Sat session is on setting up Distributed Replay and running this new Database Experimentation Assistant. If you go to the Nashville SQL Saturday site from 2016, you should find my slide deck with lots of setup pictures and steps. I’d also be happy to work with anyone who needs help getting this working. Just find me on Twitter or email me (contact info is in my bio).

I’ll be trying to preach the gospel of this setup at a couple different SQL Saturdays this year and hopefully (but probably not) PASS ’17.

https://blogs.msdn.microsoft.com/datamigration/2016/10/25/database-experimentation-assistant-faq/

https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/22/technical-preview-database-experimentation-assistant/

Above are a couple of links to get you started, straight from Microsoft. I hope this primer has piqued an interest and you’ll go give this a try at work. Remember, there’s always a performance overhead for doing this in production, so please start with test or development first to get comfortable with the setup and its results.

2017 SQL Saturdays

This year will be a bit low on SQL Saturdays. My goal was to hit one every other month, but I’ll just miss it.

I’ve already presented at Nashville back in January, where I debuted my new session on Distributed Replay. It went okay for my first time doing it. There wasn’t a lot of interest though; only 7 people showed up. That’s okay, because one of the fellow presenters who joined me said he got everything he needed out of it, which really made me happy to hear.

I then gave that session again to my user group. I demo failed hard. That’s okay though. Live and learn.

I hope to give it in Rochester, Columbus, and Pittsburgh this year. I doubt I’ll attend PASS Summit so that’ll be the end of the line for me. I’ll end up one short of my goal. Maybe I can swing by Louisville. That would add a sixth and a place I haven’t been yet.

Got an EMC DataDomain?

Are your restores slow?

Recently we had to restore our 5 TB prod database to rebuild an AG node. Here are some things we learned along the way that will hopefully help you speed up your restores.

Back up your VLDB to multiple files. We found 12 to be the sweet spot in our setup. Make sure you’re going over 10-gig NICs on both ends of the transfer.
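For reference, a striped backup is just multiple DISK clauses in one BACKUP statement. A sketch with placeholder paths (we used 12 files; trimmed here to 3):

```sql
-- Striping a backup across multiple files: paths and names are placeholders
BACKUP DATABASE dbname
TO DISK = '\\datadomain\sql\sqlbu\Full\dbname_File1.bak',
   DISK = '\\datadomain\sql\sqlbu\Full\dbname_File2.bak',
   DISK = '\\datadomain\sql\sqlbu\Full\dbname_File3.bak'
WITH STATS = 10;   -- progress messages every 10%
```

When you restore, you list the same files back in the FROM clause, which is why the automation above has to know all the part names.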

In terms of folder directories on that thing we learned to go wide and small. Let me explain. Our setup is this:
\\datadomain\sql\sqlbu\\\Full or Tlog

Under Tlog we break out by day of the week. We’ve found too many files in a folder cause the DD to choke.

We also learned that you can overrun the DD with work. We could kick off all our backups to an Isilon and be fine; the DD can’t handle that, and we’d get “network name not found.” The DD is serving up CIFS, and it appears to be not too hard to overrun. To avoid this, we use our CMS to stagger our backups into groups at different times. Tlogs are generally okay running at the top/bottom of the hour, probably because they’re small and quick.

There’s a registry key you can modify to change the CIFS timeout in Windows (I don’t have the link), but we weren’t comfortable with that idea.

That’s all I’ve got for now, if we think or find more I’ll update this posting.

EDIT/UPDATE Here’s some EMC KBs that may help:
KB181958_Poor_Restore_Speed
KB182042_DDR_Performance_TroubleshootingGuide_PDF
KB182188_CIFS_and_Networking_Tuning
KB182268_TroubleShooting_CIFS_Performance

Second edit: our DataDomain OS is 5.5.2.2 as of this writing, with all of the above applying.

Quick SSRS report builder tip

I started building a report in SSRS. I had never done this before. I learned two things over time that I wish I had known from the start and I want to share them below.

1) Use stored procs for everything
2) When you hit Preview and your report doesn’t look right (maybe you added some rows in your get stored proc and you see the new data in SSMS but not in VS), hit this thing: https://imgur.com/a/66ri8 (note the red arrow pointing at the refresh button)

Apparently there’s a cached version of your report in that preview pane. So if you’re like me and you’re wondering why in the world you’re not pulling new data, that will probably fix things up for you.

Learning that only took like a day and a half. I wish I was kidding.
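For point 1, here’s a sketch of the kind of dataset proc I mean. Everything in it (proc name, columns, table, parameters) is hypothetical; the point is that the report’s query lives in a proc instead of embedded SQL, so you can change and test it in SSMS:

```sql
-- Hypothetical report dataset proc; SSRS maps report parameters to @StartDate/@EndDate
CREATE PROCEDURE dbo.rpt_GetOrders
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SET NOCOUNT ON;   -- extra rowcount messages can confuse report datasets

    SELECT o.OrderID,
           o.OrderDate,
           o.CustomerName,
           o.TotalDue
    FROM dbo.Orders AS o
    WHERE o.OrderDate >= @StartDate
      AND o.OrderDate <  DATEADD(DAY, 1, @EndDate);
END
```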

Madison, Columbus, and Indy SQL Sat Recap

I’ve been a bit busy this year. I kept submitting to places, and for some oddball reason I kept getting picked. I was selected for Atlanta as well but had to withdraw because of work. I also had to pull out of Maine pre-selection, thanks to work. Boo, work.

So let’s recap shall we?
Madison was awesome. Thanks to Jess and the team there for a fantastic event. I loved that they had 75-minute sessions; that’s definitely my speed for my build-a-cluster talk, as I usually find myself running out of time. It is an 8-hour drive from Cleveland. I got my wife to come with me, and she watched me present. It was nice to spend a weekend together, even if she had to go back to the room early because she didn’t feel well during the afterparty. I also found a new favorite beer; go find yourself some Spotted Cow.

http://www.newglarusbrewing.com/index.cfm/general/index

Columbus marked the one-year anniversary of my speaking start, so this event will always hold a special place in my heart. It’s a quick drive, and Peter Shore and David Maxwell, who run it, are great guys who do a really great job. My session this year went well, except I had a total brain fart and somehow forgot to go through the installer to show how you span an instance across the nodes of your cluster. I threw the slides up on the SQL Sat site for this talk and added that in at the end, so hopefully I caught myself there. Otherwise it was another lovely day in Cbus.

I just came home today from Indianapolis for SQL Sat Indy. I have family about 10 minutes from the venue, so I was able to come in Thursday night and have dinner with them. It was nice catching up. I also spent Friday in Adam Machanic’s query tuning precon. Well worth your monies. All of them. I mean, the dude is awesome to watch and will teach you ways to manhandle the optimizer. He wrote WhoIsActive, and giving him monies for that precon was thanks enough for that tool alone. As for the day, Hope Foley and her team did a fantastic job. The event ran smoothly, lunch was delicious, and a few fellow speakers came to my session. That actually made me pretty nervous. I think I did a good job in my talk, and hopefully I taught some people some useful clustering facts. A special thanks to Drew, Andy, Rob, and Ben for attending.

Speaking of my clustering talk, there’s no way I can keep doing it in an hour. It’s just way too much stuff, so I’ve got to come up with some new material for next year. Well, speaking of next year, my wife and I are going to have a baby in March, so I won’t be traveling nearly as much as I did this last year. I’ve been very grateful for the opportunity to travel and present. I love the entire SQLSAT and SQLFAMILY concepts and what they stand for. I never thought I’d be a public speaker or enjoy it; now I can’t get enough. I’m already really sad thinking about how I can’t get out to travel around nearly as much next year due to baby number two. Don’t get me wrong, I’m excited for our second child, and I knew this would happen; I’m just bummed out about it. This will give me a year or so to hopefully come up with a few talks and keep blogging. I’ll be at the Cleveland and Cbus events, but that’s probably it.

I also will probably step down from helping run the Cleveland RUG. I’m secretary and speaker coordinator right now. I’m glad to help and book things, but I know I won’t have the time to devote to this sort of stuff as the parent of a newborn and a 4-year-old. When they get a bit older I’ll be able to get back into it. At least that’s what I’m telling myself. PS: ONSSUG is booking for March 2017 and onward, so get in contact with me if you want a month. 😉

As for future talks, I have two that I want to try to create: one on Distributed Replay and the other on Storage Spaces Direct. I think the Replay one is going to happen for work, as I might need to learn it now-ish. So hopefully I can give that in Columbus and at my RUG next July.

One last thing. Before my session, an attendee told me they had seen someone tweeting about me, decided to follow me, and came to see my talk. It was the most surreal speaking moment for me. I have a pretty low opinion of myself, if I’m honest. I’m just a dude who had to learn clustering and now enjoys going around teaching and helping other people with it. I never in my life would have thought someone would come watch this session, let alone say they saw me on Twitter and decided to show up. I’m really blown away by this.

I think I am a poster child for imposter syndrome. In fact, at the start of my talk when I mention my title, senior SQL DBA, I say it’s a load of crap; I just break a lot of stuff and have to Google how to fix it. I could probably go on down this track, but that would lead to another 1000 words, and to be honest my living room is a mess and I just want to go chill and play some Xbox before bed.

If you made it this far, thanks for reading and talk to you soon.