Is this thing on?

Been a while since I wrote anything. Let’s see where we are at in life.
Last year I made it to only Nashville/Rochester/Columbus SQL Saturdays.

This year I helped run Cleveland. I'm doing Raleigh on April 14th. I also applied for Columbus, Memphis, and Oslo. I'm really hoping to go to Norway; I think that'd be awesome.

Also of note, I stepped down as the VP of the Ohio North SQL Server User Group. I helped run the group in some capacity for almost 3 years (really about 2.7), so it was time for new blood. I'm glad we conned Eric B into doing it.

I will still be around, I have no plans on vanishing. I love SQL Family life too much.

Did someone run a trace?

Today I had to check whether someone ran Profiler and a SQL trace in production without approval. A total no-no.

If you query the default trace for that timeframe, you'll see SQL Server Profiler show up under the application name with some numbers after it (I think it's a GUID). So there is a way to tell if someone is up in your database tracing your stuff.

Luckily in my case no databases were traced, so we didn't have to do more legwork. But this is good to know: if you can still find the default trace files from around that timeframe, you've got a shot at checking. Something like the query below will get you started.
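Here's a rough sketch of the kind of query I mean. It reads the default trace with sys.fn_trace_gettable and filters on application name; the 'SQL Server Profiler%' pattern is my assumption of how Profiler identifies itself, so match it to whatever actually shows up in your trace.

-- Grab the path of the current default trace file
DECLARE @path NVARCHAR(260);
SELECT @path = path FROM sys.traces WHERE is_default = 1;

-- Read it (DEFAULT also reads any later rollover files); to go further back,
-- point @path at the oldest log_N.trc file still sitting in that folder
SELECT t.StartTime,
       t.LoginName,
       t.HostName,
       t.ApplicationName,
       te.name AS event_name
FROM sys.fn_trace_gettable(@path, DEFAULT) AS t
JOIN sys.trace_events AS te ON t.EventClass = te.trace_event_id
WHERE t.ApplicationName LIKE 'SQL Server Profiler%'   -- assumption: adjust to what you see
ORDER BY t.StartTime;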

Connect

connect.microsoft.com

This is a website where you can file a public bug report for many MS products. Recently I made one about SQL. Here's the scenario:

Install SQL 14 SP1 or SP2 media from MSDN as an FCI with ReFS disks.

The installer basically does not let you get past picking clustered disks, and it screams at you that a bunch of mount points are invalid. This was annoying. We got around it by using one NTFS base drive (E:), then moving databases, hanging mount points, and adding ReFS drive letters after the fact. I opened a Connect item here:

https://connect.microsoft.com/SQLServer/feedback/details/3135241/sql-14-fci-installer-issues-with-refs-volumes

 

Microsoft responded and said it'd be fixed in an upcoming SP for SQL 14 (SP3 I presume). Pretty cool stuff. Connect can be an interesting place to browse: you can upvote other people's bugs to give them visibility, or see if other people are running into issues you've found.

Had to automate a restore

I needed to automate a restore of a database from prod down to test. This seemed simple enough, except the database backs up to 12 files, I can’t use a linked server, and the time is included in the filename of the backup. All these parts made it more fun than I expected. Fun. Right.

 

So here's what I used to do the job; I figured that if you can't use your own tools or dbatools, this might help you out. I built a table in our admin database with one field to house the name of the backup file on disk. To get yesterday's backup today, I run the following code:

 

--1 truncate the holding table

TRUNCATE TABLE admin..tablenameforyourdbnameondiskgoeshere
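If you're building this from scratch, the holding table is nothing fancy. Here's a minimal sketch with placeholder names; mine is just one column (called db in step 3) for the file name:

-- One-column holding table for the backup file name (names are placeholders)
USE admin;
GO
CREATE TABLE dbo.tablenameforyourdbnameondiskgoeshere
(
    db VARCHAR(260) NOT NULL   -- full name of one of the backup files on disk
);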

 

--2 run some POSH in a SQL Agent job

 

# Start from the file system, not the SQLSERVER: provider the Agent step starts in
Set-Location C:\

# Grab the name of the most recent backup file (one of the 12 parts) for this database
$wtf = Get-ChildItem '\\pathtoyourbackupsgoeshere\*' -Name -Include 'dbname_*' | Select-Object -Last 1
$wtf

# Insert that file name into the holding table for the restore step to use later
$query = "INSERT INTO tablenameforyourdbnameondiskgoeshere VALUES ('" + $wtf + "')"
Invoke-Sqlcmd -ServerInstance "instancenamegoeshere" -Database admin -Query $query

EXIT

 

The gci (Get-ChildItem) line finds the last backup file for that database, one of the 12 parts, so I have the full name with the date and time to insert into a table and use later. There might be a better posh way to do this but I suck at posh. I run this in a SQL Agent job as a PowerShell step. You have to Set-Location to C:\ first to get it to find your stuff (the Agent PowerShell step starts you in the SQLSERVER: provider), because that's what Stack Overflow told me to do.

 

--3 I needed the LEFT to trim the file name down to the common prefix, because my restore has 12 DISK lines (one for each file) and I just left those hardcoded

 

DECLARE @file VARCHAR(50);

SELECT @file = db FROM admin..[restoreworksforreporting];

SET @file = LEFT(@file, 20);   -- keep just the common prefix shared by all 12 backup files

SELECT @file;

 

The restore statement goes here; I had to turn the name into a parameter, so I had to use dynamic SQL. Here's a brief example of that:

DECLARE @sql VARCHAR(MAX) = 'RESTORE DATABASE dbname FROM
    DISK = ''\\pathofrestore\' + @file + 'File1.bak'',
    ... ';   -- and so on for the other eleven files

EXEC (@sql);

I had 12 backup files to restore from; the database is 1.7 TB on disk, about 7 TB expanded, with 23-odd files. Hopefully this helps someone else out.
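If you'd rather not hardcode the twelve DISK lines, here's a sketch of building them in a loop instead. This isn't what I actually run (mine are hardcoded), and it assumes the parts follow the FileN.bak naming pattern from the example above, so treat it as a starting point:

DECLARE @file VARCHAR(50),
        @sql  VARCHAR(MAX),
        @i    INT = 1;

SELECT @file = db FROM admin..[restoreworksforreporting];
SET @file = LEFT(@file, 20);

SET @sql = 'RESTORE DATABASE dbname FROM ';

WHILE @i <= 12
BEGIN
    SET @sql = @sql
             + 'DISK = ''\\pathofrestore\' + @file + 'File' + CAST(@i AS VARCHAR(2)) + '.bak'''
             + CASE WHEN @i < 12 THEN ', ' ELSE '' END;
    SET @i += 1;
END;

-- Add any WITH options your restore needs (REPLACE, MOVE, STATS) before running it.
-- PRINT @sql;   -- sanity-check the generated statement first
EXEC (@sql);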

Server C: running low on space?

Try this quick tip to regain some space. We use SCCM to deploy Windows updates and they can fill your C: over the life of the box. We use these two methods to reclaim space:

Method one: empty all the recycle bins for all users.

For Windows Vista, Windows 7, or Server 2008 enter this command:

rd /s c:\$Recycle.Bin

For Windows XP or Server 2003 enter this command:

rd /s c:\recycler

 

Method two: open Control Panel > Configuration Manager > Cache tab > Configure Settings > lower the slider so there's less room for updates to stick around > hit Delete Files > check 'Delete persisted cache content' > hit Yes.

This week I gained back about 16 gig of space on a C: that had 0 MB remaining.

A GUI for Distributed Replay?

What do we want? GUIs! Mostly because I'm a lazy fellow who likes clicking the mouse instead of learning shell commands and automating all things. That habit needs to change, but that's a kick in the rear for another day. What I can tell you is that Microsoft really wants you to upgrade to SQL 2016 and get on the latest and greatest, and they're making that even easier with the release of a new tool titled "Database Experimentation Assistant." Experiments? That sounds exciting to me. I hope it is to you.

So what does this shiny new application do? It lets you GUI-ify the process of running Distributed Replay. Yes, you will need to have installed and set that up first. Not only that, it will do some analysis on your workloads and tell you whether the newer SQL Server will be slower than your current production box. Let's be honest: that Monday morning after a weekend upgrade, coming into work and your phone rings because the application isn't performing, is not what anyone wants.

Starting in SQL 2012 you could use Distributed Replay to capture your current workload and play it back on your new instance, and if you did a lot of work you could get pre and post stats to see if your queries would fly or run slower. This tool makes all of those calls for you and makes your life easier. The tool is new as of PASS 2016, so I recommend you give it a try at work. We did and found a bunch of MERGE statements that were going to run slower. Those are real-world findings we took to our development team to update before we went live. (As an aside, I told them to dump MERGE, but they didn't want to listen. =) ) That most likely stopped someone's phone calls that Monday morning after the upgrade. That's results you can take to the bank.

This year my SQL Sat session is on setting up Distributed Replay and running this new Database Experimentation Assistant. If you go to the Nashville SQLSat site from 2016, you should find my slide deck with lots of setup pictures and steps. I'd also be happy to work with anyone who needs help getting this working. Just find me on Twitter or email me (contact info is in my bio).

I'll be trying to preach the gospel of this setup at a couple different SQL Saturdays this year and hopefully (but probably not) PASS 17.

https://blogs.msdn.microsoft.com/datamigration/2016/10/25/database-experimentation-assistant-faq/

https://blogs.technet.microsoft.com/dataplatforminsider/2016/11/22/technical-preview-database-experimentation-assistant/

Above are a couple of links to get you started, straight from Microsoft. I hope this primer has piqued an interest and you'll go give this a try at work. Remember, there's always a performance overhead for doing this in production, so please start with test or development first to get comfortable with the setup and its results.


2017 SQL Saturdays

This year will be a bit low on SQL Saturdays. I think my goal will be to hit one every other month, but I’ll just miss it.

I've already presented at Nashville back in January, where I debuted my new session on Distributed Replay. It went okay for my first time doing it. There wasn't a lot of interest in it though; only 7 people showed up. That's okay: one of the fellow presenters who joined me said he got everything he needed out of it, which really made me happy to hear.

I then gave that session again to my user group. I demo failed hard. That’s okay though. Live and learn.

I hope to give it in Rochester, Columbus, and Pittsburgh this year. I doubt I'll attend PASS Summit, so that'll be the end of the line for me, and I'll end up one short of my goal. Maybe I can swing by Louisville. That would add a sixth event and a place I haven't been yet.