
Our SQL Server “Denali” experiences – Part 2 2011/05/12

Posted by DB Dave in Uncategorized.

Where has the week gone? I finally got round to continuing my SQL Denali testing today – with very promising results in fact!

To give you an idea of the environment (which I glossed over in the previous post) and a little more context, we have a number of SQL servers which sit between our web servers and our main core databases. These act as “read-only” search, browse and caching servers for the Trade Me website. They each hold the same data, have the same configuration, and run on the same hardware, which makes it relatively easy to add new servers (i.e. scale horizontally) when necessary.

Add to the mix some clever networking kit, and a few in-house developed bits & pieces, and we can finely control the amount (and type) of load on each individual server. Very handy for testing new hardware and software, service packs, etc.

Wow! This is pretty awesome!

So, just after lunchtime this afternoon, I very gingerly put 1% of search load on the Denali test box. To cut a long story short, over the course of the next few hours I increased load until we were running 50% of all search load on the one Denali test server. Trust me when I say that this is pretty damn awesome. 🙂

The Denali test server is exactly the same hardware-wise as the existing SQL 2005 servers. We’ve tried on a few occasions to upgrade to SQL 2008, but a pretty well-documented full-text-related performance bottleneck meant we could never get more than 5-10% of search load onto a single box. The issue was the full-text merge process causing excessive locking on the underlying indexed table, which in our environment is constantly changing as well as constantly being queried, so we’ve just stuck with 2005 thus far.
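If you want to keep an eye on full-text population and merge activity yourself while doing this sort of testing, something along these lines does the trick (the catalog name is just a placeholder, so swap in your own):

-- Rough sketch: show any full-text populations currently in flight.
SELECT OBJECT_NAME(p.table_id)        AS table_name,
       p.population_type_description,
       p.status_description,
       p.start_time,
       p.range_count
FROM   sys.dm_fts_index_population AS p;

-- Overall population status for a catalog (0 = idle).
SELECT FULLTEXTCATALOGPROPERTY('ft_catalog', 'PopulateStatus') AS populate_status;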

Although everything went very well, there were a few weird things I ran into while getting the server ready. Firstly, I needed to shrink a tempdb data file that had blown out, but the shrinkfile operation never completed (it actually timed out once or twice after around 5 minutes, which I’ve never come across before). Because I couldn’t shrink the data file, I thought I’d just restart the SQL service, which would recreate tempdb at its original (i.e. smaller) size, but that wasn’t the case: the initial size was configured correctly, yet after a restart the tempdb file size was unchanged. I’ll need to investigate this further next week, and will post any new findings.
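For reference, the shrink itself was nothing exotic; it was roughly the following (the logical file name ‘tempdev’ is just the default for the first tempdb data file, and the 1024 MB target is only an example):

USE tempdb;
GO

-- Check current file sizes first (size is reported in 8 KB pages).
SELECT name, size * 8 / 1024 AS current_size_mb
FROM   sys.database_files;

-- Try to shrink the data file back down to a target size in MB.
-- This is the operation that kept timing out for me.
DBCC SHRINKFILE (tempdev, 1024);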

That’s all for this week’s testing.
Cheers
DB Dave

Our SQL Server “Denali” experiences – Part 1 2011/05/09

Posted by DB Dave in Uncategorized.

I got around to installing SQL Server 2011 (Denali) on a production server last week, and fortunately it’s been pretty slick so far. And in case you’re wondering why it’s on a production server instead of a dev or test box, this particular SQL server is one of many that we use as a “read-only” layer between the webs and our core databases, so it’s actually pretty easy for us to add a new production server and siphon a small amount of load onto it. We can then slowly ramp up the amount of production load to see how it performs. If it turns to custard, we can quickly remove load from it as well.

To start with I had to subscribe the database to a few of our core SQL 2005 transactional replication publications, which was (fortunately) totally uneventful.
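If you’re scripting this rather than clicking through the wizard, a push subscription boils down to a couple of system procedures run at the publisher; a minimal sketch, where the publication, server and database names are all placeholders:

-- Run at the publisher, in the published database.
EXEC sp_addsubscription
     @publication       = N'CorePublication',
     @subscriber        = N'DENALI-TEST01',
     @destination_db    = N'SearchDB',
     @subscription_type = N'Push',
     @sync_type         = N'automatic';

-- And the distribution agent job for that subscription.
EXEC sp_addpushsubscription_agent
     @publication              = N'CorePublication',
     @subscriber               = N'DENALI-TEST01',
     @subscriber_db            = N'SearchDB',
     @subscriber_security_mode = 1;  -- Windows authentication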

We use full-text quite extensively, and going from SQL 2005 to Denali meant creating a new data file and filegroup for the full-text index to use. Strictly speaking I didn’t have to create a new physical file for the full-text filegroup, but we put our full-text catalogues on RAM drives, so I needed one in order to control where the index data lives. Once that was done, setting up full-text itself wasn’t very different from SQL 2005 (other than needing to specify a filegroup destination rather than a directory path).
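In case it’s useful, the whole thing boils down to something like this; the database, drive letter, table and index names are all made up for illustration:

-- Add a filegroup and a data file for the full-text index to live in
-- (R:\ here stands in for our RAM drive).
ALTER DATABASE SearchDB ADD FILEGROUP FT_FG;
ALTER DATABASE SearchDB
ADD FILE (NAME = N'SearchDB_ft', FILENAME = N'R:\SearchDB_ft.ndf', SIZE = 512MB)
TO FILEGROUP FT_FG;

-- Create the catalog, then point the full-text index at the new filegroup
-- (specifying a filegroup rather than a directory path is the main change from 2005).
CREATE FULLTEXT CATALOG SearchCatalog;

CREATE FULLTEXT INDEX ON dbo.Listings (Title, Description)
KEY INDEX PK_Listings
ON (SearchCatalog, FILEGROUP FT_FG)
WITH CHANGE_TRACKING = AUTO;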

Once replication latency had come down, and the full-text catalogue was fully populated (which didn’t take long at all), I needed to synchronise the database objects with another production server to make sure there were no differences such as missing tables and indexes, different version of stored procedures, etc. This was the biggest hurdle so far, since our current comparison tool (Red Gate SQL Compare) doesn’t yet support Denali. I tried a few alternative products, and eventually found a tool called “SQL Delta” which worked! It’s only an evaluation copy though, so for the next 14 days I’ll be okay… after that I’ll need to figure something else out. 🙂
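If you find yourself without a working compare tool at all, a very crude fallback is to checksum the module definitions on each server and diff the output; it’s no substitute for a proper schema compare, but it will flag obviously different procedures, views and functions:

-- Run on both servers and diff the results.
SELECT s.name + N'.' + o.name                     AS object_name,
       o.type_desc,
       CHECKSUM(OBJECT_DEFINITION(o.object_id))   AS definition_checksum
FROM   sys.objects AS o
JOIN   sys.schemas AS s ON s.schema_id = o.schema_id
WHERE  o.type IN ('P', 'V', 'FN', 'IF', 'TF')     -- procs, views, functions
ORDER  BY object_name;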

That’s as far as I’ve managed to get so far, mostly because actual work keeps getting in the way of the cool stuff (like testing new hardware, new versions of SQL, etc.), so I’ll follow up next week with another post on how the first round of testing goes.

Cheers
DB Dave