Analysing Indexes - reducing scans.

The whole subject of database/application tuning is sometimes akin to a black art; it's pretty easy to find your worst 20 of whatever, but actually reducing operational overhead can be rather more tricky.
If you ever read through my analysing indexes post you'll know I have a number of ways of seeking out tuning opportunities in the database.
This is a slightly different slant on one of those, and it produced an interesting side effect.
We all know that, except for very small tables, avoiding scans is a key part of tuning; here's a quick way to achieve this.
As a word of warning, dmvs generally store cumulative data and may not always retain everything, subject to available resources.

As part of our software releases we try to switch cluster nodes, so SQL Server gets restarted, which clears down the dmvs; there are a number of advantages to clearing the dmvs beyond just index analysis. Anyway, I just wait a few days after the release and run this script, which will find any tables/indexes which are scanned but never get seeks.

-- find indexes with scans > 100 and no seeks
select object_name([object_id]), index_id, user_seeks, user_scans, user_lookups, user_updates, last_user_seek
from sys.dm_db_index_usage_stats
where database_id = db_id()
and user_scans > 100 and user_seeks = 0
order by user_scans desc;
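Since these counters are cumulative and reset on restart, it can be worth checking how long the stats have been accumulating before reading too much into the numbers. A minimal sketch - sys.dm_os_sys_info exposes sqlserver_start_time on SQL Server 2008 and later:

```sql
-- when did SQL Server last start (and hence when were the dmvs last cleared)?
select sqlserver_start_time
from sys.dm_os_sys_info;
```

On SQL Server 2005 that column isn't available; the creation date of tempdb in sys.databases is a common stand-in.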

The interesting side effect was to discover that the 3rd most scanned table didn't have any data and furthermore was not used in the application, but there was obviously a process calling this table very frequently.

It was then just a case of finding the queries which scanned the tables and sorting out an efficient index - it's really as simple as that.
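One sketch of how you might track the scanning statements down is to search the plan cache; here MyTable is a placeholder for a table from the scan list, and like-matching against the statement text is crude but often good enough as a starting point:

```sql
-- rough sketch: find cached statements referencing a given table,
-- ordered by cumulative logical reads (MyTable is a placeholder name)
select top 20
    qs.total_logical_reads,
    qs.execution_count,
    substring(st.text, 1, 200) as query_text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
where st.text like '%MyTable%'
order by qs.total_logical_reads desc;
```

Like the index usage stats, this only covers what is still in the plan cache, so it shares the same caveat about data being cumulative since the last restart.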

My 7th most scanned table was averaging 80k scans a day at a current cost of 24 io per scan. A suitable index dropped this to 7 io, and the index stats showed no further table scans. It might not sound much in the overall scheme of things, but saving 1.3 million io per day always helps.
The whole thing about a table scan, or clustered index scan, is that it returns the entire table to the buffer cache; eliminating scans may well free up buffer cache and improve its usage.
Quite often smaller tables used this way don't suffer frequent changes, which can make them good candidates for page compression, reducing io and storage further.
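A rough sketch of how that might look, assuming SQL Server 2008 Enterprise Edition and a placeholder table name dbo.MyTable - estimate the saving first, then rebuild:

```sql
-- estimate what page compression would save for this table
exec sp_estimate_data_compression_savings
    @schema_name = 'dbo',
    @object_name = 'MyTable',      -- placeholder table name
    @index_id = null,
    @partition_number = null,
    @data_compression = 'page';

-- if the estimate looks worthwhile, rebuild with page compression
alter index all on dbo.MyTable rebuild with (data_compression = page);
```

The rebuild takes locks and uses resources, so as with any index rebuild it belongs in a maintenance window on a busy system.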
Circumstances will obviously vary from database to database, and your table needs to be of some size; on our current database I have around 60 tables which scan and never seek, nearly all on index_id 1 or 0 (clustered index or heap) - only a handful of secondary indexes appeared in my list.

As always use this information as a tool to assist not a rule to enforce.

Published Wednesday, May 19, 2010 8:44 PM by GrumpyOldDBA


# re: Analysing Indexes - reducing scans.

Friday, May 28, 2010 2:06 PM by DavidB

Great post Grumpy. The often overlooked but most necessary detail of tuning. It is the little things that yield the greatest results when added together over time.

Thanks for posting.

# Given the choice 8 out of 10 Optimisers prefer.........

Friday, June 4, 2010 12:09 PM by Grumpy Old DBA

Did you know that included columns do not partake in the uniqueness of a unique index? (see below)
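A minimal illustration with made-up names (dbo.SomeTable, col1, col2): only the key column participates in the uniqueness check, while the included column just rides along at the leaf level.

```sql
-- uniqueness is enforced on col1 alone; duplicate col2 values are fine
create unique index ix_demo on dbo.SomeTable (col1) include (col2);
```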

# re: Analysing Indexes - reducing scans.

Monday, June 7, 2010 6:20 PM by ChrisAVWood


Your script will certainly help but I noticed it can also call out system stored procs. Any chance you can improve it to only look at user stored procs?



# An interesting anecdote

Wednesday, July 28, 2010 1:47 PM by Grumpy Old DBA

A while ago I blogged about using the dmvs to identify lookups and scans on tables http://sqlblogcasts