November 2008 - Posts

Testing Times 3

(I was on holiday for three weeks, so apologies for the delays.)

In my previous posts I briefly mentioned the task of establishing a method to benchmark a new SQL Server build or migration.
This hasn't been an easy task by any means, and I am under pressure not to base any posts on material covered by non-disclosure agreements and confidentiality statements in emails. Now I would never name names anyway unless I had agreement, and I'm not on a campaign to discredit any storage provider or data centre provider. But I am interested in the technical performance differences between systems and storage, and in how, as a DBA, I can measure them - and hopefully share that information so that as a group DBAs can get the best performance for their clients or employers, understand where performance issues may arise, and know how to identify them.
I have posts to come in this area of testing which will consider the use of IOmeter, whether it has any relevance to SQL Server, and what we can get out of Perfmon. I will also talk about storage in a very general manner, because this is the "sensitive" area.

Just to give some background on my experience here: I have two certifications in fibre channel architecture. No, that doesn't make me anything but a casual observer, but the courses did give me an insight into this aspect of storage using networking. I've worked with storage arrays since SQL Server 6.5 - around 10 years ago I had a SQL Server 6.5 instance running on multiple storage arrays using RAID 10, and I've worked with external storage and SANs ever since. In fact my first experience with an RDBMS was Ingres running on Solaris, with a WFWG 3.1 front end, and we were using striped drives even then - as this was my first introduction, I always imagined everybody ran like this. Anyway, I digress. Performance has always been important to me and it's the area in which I usually work. Generally hardware is a minor part of tuning; the old adage used to be that it was 90% code and 10% hardware. However, it seems that this may not be the case quite so much any more.

So what else have I been looking at? Multiple files for the database, multiple filegroups for the database, multiple drives, the fibre channel architecture, the storage guys -- and it's all a bit interesting.

So here's my basic set of tests, a sample set of results and the scripts so you can do this yourself.

http://www.grumpyolddba.co.uk/Infrastructure/sqlbenchmark.htm

So what do I need to say about this? I've run all the tests on an enterprise server; nominally let's say it is a 4 x quad core with 32GB of RAM. All the scripts were run manually from a query window in a Terminal Services session on the actual server, to avoid any issues with networking.
OK, you can't get the index stats out of SQL 2000, but the tests are designed to run on SQL 2000/2005/2008. You compare the run times to get a comparison.
I created a database in simple recovery with a single 80GB mdf and a 10GB ldf. SQL settings are all out of the box.
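For anyone wanting to set up the same starting point, the creation step can be sketched like this - the database name and file paths are just placeholders, and I've sized the files in MB so the script also runs on SQL 2000:

```sql
-- Benchmark database: single 80GB data file, 10GB log, out-of-the-box settings
CREATE DATABASE BenchDB
ON ( NAME = BenchDB_data,
     FILENAME = 'D:\SQLData\BenchDB.mdf',
     SIZE = 81920MB )
LOG ON ( NAME = BenchDB_log,
         FILENAME = 'E:\SQLLog\BenchDB.ldf',
         SIZE = 10240MB );

ALTER DATABASE BenchDB SET RECOVERY SIMPLE;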
I cleared the proc and buffer caches before each test run, but did no stats updates. For the index stats I took the database offline and back online to clear the DMVs, so I could show exactly what each test did in terms of IO.
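The housekeeping between runs amounts to a few well-known commands; here's a minimal sketch (again BenchDB is a placeholder, and the DMV query only applies to 2005/2008):

```sql
-- Flush dirty pages, then clear the plan and buffer caches before each run
CHECKPOINT;
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;

-- SQL 2005/2008 only: cycle the database offline and back online to reset
-- the index usage DMV counters, so each test's IO shows up in isolation
ALTER DATABASE BenchDB SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE BenchDB SET ONLINE;

-- After a test: per-index activity since the database came back online
SELECT object_name(object_id) AS table_name,
       index_id, user_seeks, user_scans, user_lookups, user_updates
FROM   sys.dm_db_index_usage_stats
WHERE  database_id = db_id('BenchDB');
```

Note that DBCC DROPCLEANBUFFERS only drops clean pages, which is why the CHECKPOINT comes first.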

You may ask why I should want to publish this and why NDAs have been mentioned. Well, having used these tests to compare the performance of essentially the same SQL Server working with different storage setups, the tests were dismissed as not being relevant to a production OLTP database and not a valid way to compare performance. See Testing Times 1 to get a feeling for where this is leading.

Anyway, more to come, and please, please, I would be grateful for any feedback on whether you think I have a reasonable set of tests. Feel free to email me directly - there's an email address on www.grumpyolddba.co.uk - if you don't want to leave a comment.