May 2010 - Posts

Analysing Indexes - reducing scans.

The whole subject of database/application tuning is sometimes akin to a black art: it's pretty easy to find your worst 20 of whatever, but actually reducing operational overhead can be rather more tricky.
If you ever read through my analysing indexes post you'll know I have a number of ways of seeking out tuning opportunities in a database.
This is a slightly different slant on one of those, and it produced an interesting side effect.
We all know that, except for very small tables, avoiding scans is a key part of tuning; here's a quick way to achieve this.
As a word of warning, DMVs generally store cumulative data and may not always store everything, subject to available resources.
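Because the usage counters accumulate only since the last instance restart, it's worth checking how long they have been collecting before drawing conclusions. A minimal sketch: tempdb is rebuilt at every instance start, so its create_date is a reasonable proxy for when the counters were last reset.

```sql
-- tempdb is recreated at instance startup, so its create_date
-- approximates when the DMV counters were last cleared
SELECT create_date AS approx_instance_start
FROM sys.databases
WHERE name = 'tempdb';
```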

As part of our software releases we try to switch cluster nodes, so SQL Server gets restarted, which clears down the DMVs; there are a number of advantages to clearing the DMVs beyond just index analysis. Anyway, I wait a few days after the release and run this script, which finds any tables/indexes which are scanned but never get seeks.

-- find indexes with scans > 100 and no seeks
SELECT object_name([object_id]) AS table_name, index_id, user_seeks, user_scans,
       user_lookups, user_updates, last_user_seek
FROM sys.dm_db_index_usage_stats
WHERE database_id = db_id()
  AND user_scans > 100
  AND user_seeks = 0
ORDER BY user_scans DESC;

The interesting side effect was to discover that the 3rd most scanned table didn't have any data and furthermore was not used in the application, but there was obviously a process calling this table very frequently.

It was then just a case of finding the queries which scanned the tables and sort out an efficient index - it's really as simple as that.
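To find the offending queries I'd typically search the plan cache for statements touching the table. A sketch, assuming a hypothetical table name (this scan of the cache is itself expensive, so run it off-peak):

```sql
-- MyScannedTable is a placeholder - substitute your own table name.
-- Searching sql text for the name is crude but usually good enough
-- to surface the statements driving the scans.
SELECT TOP (20)
       qs.execution_count,
       qs.total_logical_reads,
       st.text      AS statement_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
WHERE st.text LIKE '%MyScannedTable%'
ORDER BY qs.total_logical_reads DESC;
```

The query_plan column lets you confirm the scan and see which predicate a new index would need to cover.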

My 7th most scanned table was averaging 80k scans a day at a cost of 24 IO per scan. A suitable index dropped this to 7 IO, and the index stats showed no further table scans. It might not sound much in the overall scheme of things, but saving 1.3 million IO per day always helps.
The thing about a table scan, or clustered index scan, is that it pulls the entire table into buffer cache; eliminating scans may well free up buffer cache and improve its usage.
Quite often smaller tables used this way don't suffer frequent changes, which may make them good candidates for page compression, reducing IO and storage further.
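If you're on SQL Server 2008 Enterprise or later, you can estimate the saving before committing. A sketch, again using a hypothetical table name:

```sql
-- estimate the space saving first (SQL Server 2008+, Enterprise edition)
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'dbo',
     @object_name      = 'MyScannedTable',   -- placeholder table name
     @index_id         = NULL,               -- all indexes
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- then, if the numbers look good, rebuild with page compression
ALTER TABLE dbo.MyScannedTable
REBUILD WITH (DATA_COMPRESSION = PAGE);
```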
Circumstances will obviously vary from database to database, and your table needs to be of some size; on our current database I have around 60 tables which scan and never seek (index_id 1 or 0) - only a handful of secondary indexes appeared in my list.

As always use this information as a tool to assist not a rule to enforce.

Posted by GrumpyOldDBA with 4 comment(s)

Looking for a code Plugin !!


CREATE TABLE [dbo].[MSPaymentForExtraction]([MSPaymentID] [int] NOT NULL IDENTITY (1,1) NOT FOR REPLICATION

Posted by GrumpyOldDBA with 1 comment(s)

Just when you thought it was safe..........

One of my duties is to handle software releases to our Production system. As is my wont, I always run my eye down any schema changes, and this new object stood out for a number of reasons.

I may add this to my interview questions:



CREATE TABLE [dbo].[MSPaymentForExtraction]([MSPaymentID] [int] NOT NULL IDENTITY (1,1) NOT FOR REPLICATION


Posted by GrumpyOldDBA with 2 comment(s)

Did I back that file up ?

As part of some diagnostics I was wishing to check if a particular file had been locked by a backup process whilst I was trying to copy it.

So I put in a request to the DataCentre for a list of files backed up and the times when they were backed up - seems reasonable to me? However the response was somewhat strange: apparently I can only find out if a file was backed up by asking to have it restored; there is no facility (so I'm told) to produce a list of backed-up files.

It's good to know that in a recovery situation I won't know if that critical database backup was taken to tape until I ask to have it restored!

On a more practical point, the reality is that restoring a production system from a previous backup is highly unlikely - in fact it's never happened to me - mainly because I usually work with log-shipped servers, and most of the systems I support are 24x7 or close to it, whereby going back in time would be a logistical nightmare.

My personal view on production SQL backups is not to stream to tape but to ship the logs/backups to at least two other locations; with three copies in three locations I see no reason ever to invoke a tape backup strategy, notwithstanding the classic "will it actually restore" and/or "was it actually backed up" questions (see the original reason for this post).
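Whatever the tape software records, SQL Server keeps its own backup history in msdb, which at least answers "was it actually backed up" for the database files themselves. A quick sketch:

```sql
-- recent backups as recorded by SQL Server itself, with the
-- destination device - independent of the tape software's catalogue
SELECT TOP (50)
       bs.database_name,
       bs.backup_start_date,
       bs.backup_finish_date,
       bs.type,                    -- D = full, I = differential, L = log
       bmf.physical_device_name
FROM msdb.dbo.backupset AS bs
JOIN msdb.dbo.backupmediafamily AS bmf
     ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_start_date DESC;
```

Of course this only tells you the backup file was written, not that the tape process subsequently picked it up - which was the original problem.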

I'll not go into the problems with a database where it took over 24 hours to back up to tape - even a diff took the best part of 18 hours - totally the wrong solution!

Posted by GrumpyOldDBA with 19 comment(s)

w WARN: Thread pool pressure. Using current thread for a work item.

The skill set needed by a DBA can be quite diverse at times, and a run-in with SSRS 2005 probably illustrates the point quite well. I don't have deep skills in IIS, although I was responsible for the design and deployment of an online mortgage application site some years ago. I had to get hands-on with IIS 5, firewalls, intrusion systems, ISA Server, DMZs, NAT, IP and lots of other acronyms, so I have an understanding of these things but never had to do anything other than set up and configure IIS - no tuning.

So back to SSRS 2005: it uses IIS, and we have a pair of servers which can serve up over 120k reports a day - and I don't mean simple one-page reports here, this is a billing system. The SSRS servers are x64 and have a substantial amount of grunt.
As is often the case, a maybe slightly unrelated production issue forced a review of the servers; the SQL side was fine, but my colleague discovered a large number of warnings about thread pool pressure:

"w3wp!runningjobs!11!4/26/2010-11:16:37:: w WARN: Thread pool pressure. Using current thread for a work item."

Some searches brought back information suggesting increasing the number of worker threads, but beyond a description of what the warning meant, not much else.
There was one post which suggested one worker process per socket, which would equate to 4 on our servers, but that didn't seem much of a change, especially as one of our servers was set to 1 and the other to 5.
I suggested we set this to at least 1 per core, which would give us 16, and then monitor carefully. My colleague had done some research about queues and put up some performance counters to watch this, alongside Process Explorer to actually see how many processes fired up.
Fortunately I have extensive monitoring on the SSRS servers, so I can see a vast number of real-time stats - also displayed in SSRS, but that's another post coming eventually - along with historical data for the last couple of years; granularity is 5 seconds or 5 minutes depending on the data set.

So what happened? 

The immediate reaction was that 16 w3wp processes started up and stayed active, with little impact on CPU or memory.
The warnings went away, which was good, and our report output was now good too; previously we'd been having timeouts (essentially we raise invoices as PDF documents, and at peak times this is the bulk of the output).
Performance appears considerably enhanced: the number of active connections from SSRS to the production server has quadrupled, and viewing the collected perfmon counters and comparing to previous periods we can see throughput and utilisation at +400%.
Execution times would also appear to have reduced, with most reports completing within a second.

The Question?
Nowhere could we find any recommendations on how to calculate how many worker threads might be required - answers on a postcard to the usual address.......
It doesn't seem that any SSRS books cover IIS from the view of performance tuning.
SSRS 2008 does away with IIS, and yes, we're planning an upgrade, but it will take time. Where do we start with performance tuning the engine in SSRS 2008? I believe some part of the IIS engine is still present under the hood - you'll have to excuse my lack of knowledge here; I've installed and used SSRS 2008 but have not been forced to apply any tuning.

The accepted path to performance tuning SSRS was to scale out and separate the parts, but with minimal resource utilisation on our combined boxes this seemed somewhat strange; even now my x64 box only averages 10% CPU and is using barely 50% of available memory - in fact I can't find any sign of strained resources, which makes me wonder.
I'd be very grateful for any feedback directing me to what I've been unable to find, and for advice on tuning the IIS behind SSRS.



There was one KB article, but that was a way back, and we're a service pack and a number of CUs forward.

The best explanation probably comes here