October 2006 - Posts

How to invalidate your recovery in one easy step.

As I've found this "problem" on more than one occasion I thought I'd share the details; hopefully you won't come across it yourself, but who knows.

Scenario - daily backups of databases, regular transaction log backups.

Issue - what appears to be too much free space in the ldf file, or the perception that the ldf file is too large.

Solution - after the database backup, back up the log with truncate_only and shrink the database to recapture the free space ( sketched below ).

Result - NTFS fragmentation, internal fragmentation - oh, and you now can't recover from your transaction log backups if an error occurs.
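
For clarity, here's a minimal sketch of that pattern - the database name and backup path are illustrative, not taken from any real job:

-- daily full backup ( so far so good )
backup database SalesDb to disk = 'E:\backups\SalesDb_full.bak'
go
-- the "fix" for the large ldf - this is the step that breaks the log chain
backup log SalesDb with truncate_only
go
dbcc shrinkdatabase ( SalesDb )
go
-- from this point the regular transaction log backups can no longer be applied in a restore;
-- only a fresh full ( or differential ) backup restarts a usable log chain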

Posted by GrumpyOldDBA with no comments

Not thinking it through

I was investigating the failure of a backup job, written and produced by another party, and just thought I'd share this.

The job backs up more than one database in a cursor loop; if the job fails it retries three times. So far this sounds like a good solution?

Well not really. In this case the database sizes were diverse, from a few hundred MB to tens of GB, and the backup job failed because there was insufficient disk space to back up the largest database.

What happens is that you then get multiple backups of the smaller databases until the four iterations complete or the smaller backups have filled the available disk space; in the worst case the process repeats until all available disk space is used. As the backup drive also stored hourly transaction log backups for the databases, the potential to bring the whole server to a halt was quite high.

So the point I'm making is that retry on fail might not actually be the best solution; it certainly isn't in this case.
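
As a rough illustration of the alternative, here's a minimal sketch of a pre-flight check that could run before the backup loop instead of relying on blind retries - the drive letter, the 50GB threshold and the use of the undocumented xp_fixeddrives are all assumptions on my part:

-- check free space on the backup drive before starting the backup loop
create table #drives ( drive char(1), mb_free int )
insert into #drives exec master..xp_fixeddrives
--
declare @mb_free int
select @mb_free = mb_free from #drives where drive = 'F'   -- backup drive letter - illustrative
drop table #drives
--
if @mb_free < 50000   -- roughly what the largest database backup needs - illustrative
begin
    raiserror ( 'Insufficient free space on the backup drive - aborting backup job.', 16, 1 )
    return
end
-- otherwise carry on with the backup loop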

Posted by GrumpyOldDBA with no comments

Invalid Descriptor Index error solved ( Replication )

This has been driving me mad ( ok, really grumpy !! ) as it's such a vague error - after installing replication the distribution agent failed with this error message.

So thanks to Raymond Mak I found out that the service pack had not updated the template mdf and ldf files for the distribution database - this meant the distribution database was at RTM level vs the SP4 level of the user databases, which gives the error. I copied the files from another install to test, but will have to re-apply the service pack.

So the big worry now: if the service pack doesn't always get it right, what else does it miss? The advice for this error had always been to re-apply the service pack ( 2 and 3 ), so it appears to be a consistent problem, although possibly isolated. So should a first step when having SQL problems be to always re-apply the last service pack or rollup?
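
For what it's worth, a quick way to confirm the build and service pack level the instance itself reports ( this won't show the level of the distribution database templates, just the server ):

select serverproperty('ProductVersion') as product_version,
       serverproperty('ProductLevel') as product_level   -- e.g. RTM, SP3, SP4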

Posted by GrumpyOldDBA with no comments

One

I was browsing through a publication today; it's aimed largely at big corporates and "decision makers" - so read "not technical people". Anyway it was talking about storage performance, a subject close to my heart!

Anyway the basis was the throughput that could be achieved with a new raid 5 array controller; amongst the claims were 200% faster than the previous card, 380MB/sec writes with raid 5 and 800MB/sec reads from raid 5. Sounds good, doesn't it? Add a card and double performance? Well it's not quite as it seems. The product is a sata controller and the new card doubles the ports ( drives ) available - so 8 drives vs 4 drives, yup, about double the performance ( this wasn't actually clearly stated ). Of course you've also doubled the risk of failure of the array by increasing the number of disks in the raid 5 ( each additional disk in a raid 5 increases the risk of failure ).

So throughput - again meaningless, with no specified data block size and no indication of sequential or random i/o. SQL Server would make 8KB reads/writes in the worst case of random i/o, and the average scsi 10k drive supports about 120 i/o per second ( ref: SQL Server 2000 performance tuning - MS Press ), so my calc gives 7 x 120 x 8KB, which is around 6MB/sec - hmmm. Well, it isn't even like that for writes, as raid 5 cuts available i/o by 75%, so we'd get around 2 x 120 x 8KB, which is just under 2MB/sec. The article's figures actually indicate only a 50% ( ish ) write degradation vs reads for raid 5 ( oh, this is supposed to be achieved with caching, I think, from the write up ) - check out the perf tuning guide for the write degradation figures for raid 5.
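
Just to lay the back-of-the-envelope arithmetic out, here's the same worst-case calculation as a quick script - the 120 i/o per second per drive and the 8KB page size come from the figures above, the rest is assumption:

-- worst-case random i/o throughput estimate for the 8 drive raid 5 array
declare @drives int, @iops_per_drive int, @page_kb int
select @drives = 8, @iops_per_drive = 120, @page_kb = 8
--
-- reads: using 7 of the 8 spindles, as in the calc above
select ( @drives - 1 ) * @iops_per_drive * @page_kb / 1024.0 as read_mb_per_sec    -- around 6
--
-- writes: raid 5 cuts available i/o by around 75%
select ( @drives * @iops_per_drive / 4 ) * @page_kb / 1024.0 as write_mb_per_sec   -- just under 2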

And no, this wasn't an advert, it was a case study. Hmmm - file under grumpy!!!

Posted by GrumpyOldDBA with no comments

Secondary Index Quick Fix

Sometimes it is necessary to take a blanket approach to tuning when initially faced with a poorly performing production system, for example just adding lots of memory!

The rationale behind this is that it is more risky to modify code than the database and/or hardware, and "quick fixes" are often the order of the day .. so .. this script will generate a create index statement for any column ending in ID ( likely to be a foreign key, so used in joins etc. ) which doesn't already have an index. You might like to try it on your databases just to see < grin >

-- Set to required database
use xxxxx
go
-- I've used a # temp table so the contents can be saved or viewed
-- if required
create table #test2(ObjName sysname,ColName sysname,ObjId int,ColId int)
--
insert into #test2(ObjName,ColName,ObjId,ColId)
select obj.name, col.name, obj.id,col.colid
from dbo.sysobjects obj join dbo.syscolumns col on obj.id=col.id
where col.name like '%id'
and obj.xtype='U'
-- remove where column is already indexed
delete from #test2 where convert(varchar,objid)+':'+convert(varchar,colid) in
( select convert(varchar,id)+':'+convert(varchar,colid) from dbo.sysindexkeys )
-- put QA ( Query Analyzer ) into text output mode
-- cut and paste output to create indexes
select 'create index idx_'+ObjName+'_'+colname+' on '+objname+'('+colname+')'+char(10)+'go'+char(10)
from #test2
-- once you're happy remove temp table
-- drop table #test2

Posted by GrumpyOldDBA with no comments