You may or may not be aware that one of the major new features in SQL Server 2008 is filtered indexes. Now I'm not quite sure why more hasn't been made of this, as it is without doubt a seriously significant new feature.
I suppose how significant this feature is depends upon your application(s), and as with any new feature there exists the possibility of causing serious problems if you get it wrong - or, more significantly, don't document your database. Everyone documents their databases, surely?
Let us consider a simple issue for us in the UK: you have a table which stores personnel data, including National Insurance numbers. Each National Insurance number must be unique, but not all new employees know their NI number, so the column must allow nulls - and multiple nulls at that.
Now basic DBA stuff here, if I ever interview you this is a question I'll probably ask. How do we index the National Insurance number column?
A unique index/constraint will only allow a single null, which is not acceptable, and having no constraint at all is also unacceptable.
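A quick sketch of the problem (table, column names and the sample NI value are illustrative):

```sql
create table dbo.personnel
(
    personnel_id int identity primary key,
    NIColumn     char(9) null
);

-- a plain unique index treats null as a value, so only one null row is allowed
create unique index ux_personnel_NI on dbo.personnel(NIColumn);

insert dbo.personnel(NIColumn) values (null);  -- succeeds
insert dbo.personnel(NIColumn) values (null);  -- fails: duplicate key (null)
```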
Well, up to now, to resolve this problem you create an indexed view over the NI column (an indexed view must be created with schemabinding and use two-part object names):
create view dbo.vw_NIcheck with schemabinding
as
select NIColumn from dbo.personnel where NIColumn is not null;
Now create a unique clustered index on the view. This allows you to have multiple nulls in the table but unique NI numbers.
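That clustered index would look something like this (the index name is illustrative; note the view itself must have been created with schemabinding for this to succeed):

```sql
create unique clustered index idx_vw_NIcheck on dbo.vw_NIcheck(NIColumn);
```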
With SQL Server 2008 you can now create the following index instead:
create unique index idx_personnel_NINumber on dbo.personnel(NIColumn) where NIColumn is not null;
The index only covers the non-null values, and thus you can have multiple nulls and unique NI numbers. Pretty cool, eh?
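A quick illustration of the behaviour, assuming the personnel table above (the NI value shown is made up):

```sql
insert dbo.personnel(NIColumn) values (null);        -- OK
insert dbo.personnel(NIColumn) values (null);        -- OK: nulls fall outside the filter
insert dbo.personnel(NIColumn) values ('AB123456C'); -- OK
insert dbo.personnel(NIColumn) values ('AB123456C'); -- fails: duplicate key
```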
Well there's more, to coin a phrase, and I'll post example code and query plans later to show this working in a practical manner.
If you're working with large tables and your application is multinational, then index rebuilds can be a bit of a nightmare. I've worked on systems where individual secondary indexes ranged between 10 and 20GB; this was a moderately sized database, and I'm well aware there will be far bigger databases out there, but this is based upon my personal experience.
Should your database be used 24 hours a day, when do you rebuild? Whatever you do you'll block one group of users, and if your table/index is in constant use the online rebuild option is unlikely to be of much practical use.
So filtered indexes to the rescue: tada tada tada da! (a fanfare)
Taking our 20GB index, we now create a number of smaller filtered indexes, filtering by country or region, say:
create index idx_mytable_USA on dbo.mytable(c1,c2,c3,c4,c5) where country='USA';
create index idx_mytable_UK on dbo.mytable(c1,c2,c3,c4,c5) where country='UK';
create index idx_mytable_China on dbo.mytable(c1,c2,c3,c4,c5) where country='China';
create index idx_mytable_Antigua on dbo.mytable(c1,c2,c3,c4,c5) where country='Antigua';
create index idx_mytable_Kuwait on dbo.mytable(c1,c2,c3,c4,c5) where country='Kuwait';
The immediate gains are that the individual indexes will be much smaller - a divide and conquer approach - and if we rebuild the index for China it won't block users in the USA.
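The per-country rebuild is then just an ordinary index rebuild issued against the filtered index, touching only that slice of the data:

```sql
-- rebuild only the China index; USA, UK, etc. are untouched
alter index idx_mytable_China on dbo.mytable rebuild;
```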
The filtered column(s) don't even have to be in the actual index.
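For example (column names are again illustrative), the country column can appear only in the filter predicate, not in the key:

```sql
create index idx_mytable_UK_narrow on dbo.mytable(c1, c2)
where country = 'UK';
-- country is neither a key column nor an included column here
```

That said, if your queries also select the country column you may find it worth adding it as an included column so the index remains covering.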
Whether or not coupled with partitioned tables, this will be a big step forward for 24x7 applications and for databases with large tables (and indexes) where the window for such maintenance is short. It's likely to take less time (and resource) to rebuild ten 1GB indexes than one 10GB index.
I'll hopefully post working examples next week.