February 2008 - Posts

Only once every four years thank goodness!

I never cease to be amazed by what I discover within IT, though these days I'm more often disappointed than anything else. I'm not a lover of the "let's bash Microsoft about security patches" or "we should all use Linux / open source" arguments; on the latter, I'm sure that if the most popular o/s was Linux it would be just as full of exploits.

However I was faintly amused to discover there's a bug in SQL 2008 which doesn't let it run on Feb 29th. Come on guys, how can this be? I've just wasted considerable time trying to understand why my SQL won't run, and it's on a stand-alone laptop not connected to the internet - shock horror. You're just not going to do your reputation much good with this, and that's on top of the "will there be a service pack 3 for SQL 2005?" question - let's hope it's not like SP2 if there is!

You'd just expect better !

dbcc dropcleanbuffers - or maybe not

As a consequence of another piece of work I wanted to clear the buffer cache – or to be more precise, I tried to clear the buffer cache.


So stepping back, what happens if you install more memory than the combined size of your databases? Most would probably suggest that it’s a waste of time, however what I’ve found is that tempdb makes use of this memory, and it was my initial investigation of memory usage which prompted all this, in fact I was quite surprised as to how much cache tempdb would use.


With SQL 2005 it’s quite easy to see which database is using buffer cache and how much, which is where dropcleanbuffers comes in. I have an x64 box with 8Gb of memory allocated to SQL Server; we’re involved with quite a bit of monitoring on this server, which is testing some critical business processes. I noted that just over 4.5Gb of buffer cache was allocated to tempdb, essentially most of the data cache, so I decided to clear the cache and watch utilisation based upon running server processes; in effect I wished to see which of the many databases or processes was making such heavy use of tempdb.


So here’s the crunch: try as I might, I am unable to flush the dirty pages used by tempdb. Tony kindly reminded me of his blog entry about tempdb, which I’d only re-read a few days earlier following a post concerning the use of solid state disks for tempdb; you can check out some info on solid state disks here.
A quick check through TechNet and msdn suggests that you should checkpoint your database before clearing the buffers. A test actually showed that the checkpoint doesn’t happen at once, and that dropcleanbuffers after a checkpoint doesn’t necessarily clear the buffers as you’d expect.

There is a parameter which can be added to the checkpoint command: the number of seconds in which you wish the checkpoint to complete, e.g. checkpoint 10 – SQL Server will then attempt to flush the pages within 10 seconds.

If you don’t specify a value: “When this parameter is omitted, SQL Server 2005 automatically adjusts the checkpoint duration to minimize the performance impact on database applications.”


Ref: http://msdn2.microsoft.com/en-us/library/ms188748.aspx


Using this option and then dropcleanbuffers certainly cleared out the user databases but did not clear tempdb.
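For clarity, here’s the sequence I’m describing as a sketch – the database name is a placeholder, so substitute one of your own user databases:

```sql
-- MyUserDatabase is a placeholder - substitute your own database name
use MyUserDatabase;
-- ask SQL Server to complete the checkpoint within 10 seconds
checkpoint 10;
-- then drop the clean buffers; without the checkpoint first,
-- dirty pages would survive this
dbcc dropcleanbuffers;
```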


Tony tells me you can’t checkpoint tempdb – basically there’s little point. I agree, but in saying that I’d still like to remove the buffers used by tempdb; I have no running processes, so there’s no user data within tempdb – well, technically.

But there is a point about tempdb and checkpoint: how often, after tempdb has become bloated, have you had to issue a checkpoint to allow a shrink to occur? Certainly my fellow DBA and I agree that we have both issued checkpoints against tempdb.
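The sequence we’ve both used goes along these lines – a sketch only; note that tempdev is merely the default logical name of the tempdb data file, and the target size is an example, so check yours first:

```sql
use tempdb;
-- write out dirty pages so the shrink has a chance of succeeding
checkpoint;
-- shrink the primary tempdb data file; tempdev is the default logical
-- name and 1024 (MB) is an example target - adjust both to suit;
-- verify the name with: select name from tempdb.sys.database_files
dbcc shrinkfile (tempdev, 1024);
```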

Tony also says that there’s little point in writing tempdb pages to disk, as writing the pages to disk only aids recovery, and tempdb is recreated at startup. So as my buffer cache fills, what happens to the tempdb dirty pages? I can see after a day that tempdb is now only using 1.5Gb, so 3Gb has been flushed or cleared.


Tony further says that what I’m really asking for is a way to limit what resource tempdb can use – and I guess he’s right.


So if you’re using dropcleanbuffers within your testing, be aware that it doesn’t actually flush your user database pages unless you’ve forced a checkpoint, and you may or may not be affected by tempdb’s usage.


Here’s the code I used to extract the data. Idera SQL Diagnostic Manager confirms the values/sizes of memory used, although the version of Diagnostic Manager currently in use doesn’t split the buffer cache by database.


select db_name(database_id) as 'Database Name',
convert(numeric(8,2), count(page_id)/128.0) as Mb
from sys.dm_os_buffer_descriptors with (nolock)
where database_id != 32767
group by database_id
order by count(page_id) desc;


this query counts dirty pages:


select count(page_id) as 'dirty pages'
from sys.dm_os_buffer_descriptors
where is_modified = 1;
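As a variant (my own sketch rather than code from the post above), the same dmv can be grouped by database, which shows how many of those dirty pages actually belong to tempdb:

```sql
-- dirty pages broken down by database
select db_name(database_id) as 'Database Name',
count(page_id) as 'dirty pages'
from sys.dm_os_buffer_descriptors
where is_modified = 1
group by database_id
order by count(page_id) desc;
```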


Analysing Indexes Summary

·         I’ve put the work I’ve done on using the dmvs to analyse indexes onto my web site – the links to the pages are underneath. Trying to post large documents to the blog is very difficult, and probably not what a blog was intended for!

·         I need to do more work on the analysis of the operational stats and I will publish this in due course. The basic principles that apply to the analysis of sys.dm_db_index_usage_stats can be applied to sys.dm_db_index_operational_stats .

·         There have been a number of posts and articles indicating how easy it is to find unused indexes using the dmvs – what I would say is: be very careful. For a production system you must capture dmv data into local tables, and remember that the data is cumulative.

·         What I’ve found in large production systems is that you cannot guarantee that data on every index is held in the dmvs at all times. I cannot reproduce this in test environments, but in some ways this is understandable – see section 20.

·         Make sure that you consider your work cycle very, very carefully; remember that if you have month or quarter ends then you need to capture data over those periods too.

·         In my tests a full update statistics ( not sp_updatestats ) was most likely to force index data into the views. In the databases I was monitoring I was unable to rebuild every index every day, and where systems are close to 7 x 24 and databases are reaching terabyte size you can perhaps understand why.

·         In my case I started the analysis because the system I was attempting to tune made extensive use of client side cursors, had no stored procedures and was very busy.

·         Part 3 finishes the work on the operational stats and there are some useful queries for you to use in the whole set – latterly I expanded some of the analysis to cover partitioned tables. It was pointed out to me that I’d not allowed for schemas or xml indexes in my work; I will try to remedy this in due course.
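As a sketch of the capture approach mentioned above – the table and column names here are my own invention, so adjust to suit your DBA database – the idea is to snapshot sys.dm_db_index_usage_stats on a schedule and diff between captures, since the counters are cumulative and reset on a service restart:

```sql
-- hypothetical snapshot table - create once in your DBA admin database
if object_id('dbo.IndexUsageSnapshot') is null
create table dbo.IndexUsageSnapshot
( capture_time datetime not null default getdate(),
  database_id smallint not null,
  object_id int not null,
  index_id int not null,
  user_seeks bigint not null,
  user_scans bigint not null,
  user_lookups bigint not null,
  user_updates bigint not null );

-- run on a schedule; compare successive captures to see real usage
insert dbo.IndexUsageSnapshot
( database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates )
select database_id, object_id, index_id, user_seeks, user_scans, user_lookups, user_updates
from sys.dm_db_index_usage_stats
where database_id = db_id();
```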

I’ve published the pages as mhtml but if you’d like word versions then drop me an email.

·         Analysing Indexes Part 1  - Introduction to the index dmvs

·         Analysing Indexes part 2 – Collecting index data, examining statistics, empty tables

·         Analysing Indexes part 3 – Expanding upon parts 2 and 3, looking at index size

·         Analysing Indexes part 4 – This considers the sizing of indexes and possible selection of clustered index. Did you know for example that an index on a nullable column adds an overhead of 3 bytes per index entry?

viewing object locks in SQL 2000

It's old stuff, but I thought I'd post this code in case someone else finds it useful:-

I was looking at a process which was blocking itself. As this was a data extract ( only reads ) there allegedly shouldn't have been any blocking, so the simplest way was to examine the current locks. This query shows the locks per object in a nice readable output - if your database is busy then you might want to try filtering by spid, as the output can be large. Run it against the database you're interested in, as the query is database specific.


-- run this against the database to monitor

select convert (int, sli.req_spid) As Spid,
db_name(sli.rsc_dbid) As dbid,
object_name(sli.rsc_objid) As 'Object Name',
sli.rsc_indid As 'Index Id',
substring (v1.name, 1, 4) As 'Lock Type',
substring (sli.rsc_text, 1, 16) as Resource,
substring (v3.name, 1, 8) As 'Lock Mode',
substring (v2.name, 1, 5) As 'Lock Status'
from master.dbo.syslockinfo sli with (nolock)
join master.dbo.spt_values v1 on sli.rsc_type = v1.number
join master.dbo.spt_values v2 on sli.req_status = v2.number
join master.dbo.spt_values v3 on sli.req_mode + 1 = v3.number
where v1.type = 'LR'
and v2.type = 'LS'
and v3.type = 'L'
and sli.rsc_dbid = db_id() and sli.rsc_objid > 0
order by sli.req_spid;



Posted by GrumpyOldDBA with 1 comment(s)

Custom Reports for the DBA without SSRS

Creating Custom Reports for the DBA

( You can find this and other information at http://www.grumpyolddba.co.uk/ )

1.       You don’t need SQL Server Reporting Services to do this.

2.       You don’t need IIS installed.

3.       You do need SQL Server 2005 Service Pack 2 on your client tools e.g. management studio.

4.       You don’t need Service Pack 2 on your Servers.

5.       User Databases must be in 9.0 mode for database reports.

6.       You can put the rdl files on your workstation or the servers.

7.       You don’t need BIDS.

8.       Reports can also be deployed with SSRS.

9.       Doesn’t work with SQL 2000.


·         I’m not totally sure that everyone is familiar with the addition of custom reports to Management Studio with service pack 2 for sql server 2005.

o    I’ll just make a quick mention that you should always apply service packs and patches to client tools, all versions, even if you don’t have the database engine installed.

·         I’ve been running my own custom reports against SQL Server since 6.5 publishing them to html pages, Reporting Services gave me an alternative method and now Service Pack 2 expands upon this further.

o    Here’s a couple of screen shots of what was possible prior to reporting services

o    These are static html pages generated from the web assistant wizard.

[Screenshot: backup status report generated from the web assistant wizard]
Here’s a hyperlink to a custom report which shows details of the procedure cache  -   Procedure Cache Report

o    I’ve saved these as mhtml files rather than screen shots.


·         As it stands all this works locally: you either have the rdl files on your workstation and run them against every server, or you install them on each server. I have not attempted to create any enterprise solution; that would have to be done with SSRS.

·         To make the reports easy to use I have where possible placed the code within the reports, this is intentional in an attempt to make them totally portable.

·         Some of the reports do require code on your server, and it is assumed that you have a database exclusively for DBA use on each server; failing that you would need to deploy to master or msdb. I don’t normally like adding this type of code to application databases, or system databases.


Finding Custom Reports on the menu



[Screenshot: finding the Custom Reports option on the SSMS menu]
The following Reports do not need any code on the Server, typically run these against master or msdb


·         Database Backup and Transaction Log Backup Status

o    This will colour according to how long since the last backup

o    it assumes all databases in full recovery should have transaction log backups

o    You may wish to alter the setting according to your log backup frequency

·         Workers , Signal Waits and cpu counters

o    This is essentially UMS (User Mode Scheduler) stats

o    Use this to check on numbers of workers and worker waits

·         Top 20 indexes  in cache ( this is database specific so run against the database you’re interested in )

o    It’s sometimes handy to know which tables and indexes are actually in cache

·         Procedure Cache Status

o    This was very useful when I was tracking memory problems

·         Active Running Processes on the Server

o    This captures what is actually running when you open this report

o    It also captures the entire text of the running commands

o    It’s excellent for capturing that query from hell

o    May produce a big result set

·         Plans in Cache

o    OK this is messy and produces a lot of output

o    You may want to restrict output further

o    Hopefully the report explains itself.

·         Blocked Users

o    This was for specific use to show who was blocking who when blocking occurred

o    Useful if you want to deploy an SSRS report to show basic blocking chains for non-DBA support staff, or if you want to restrict users from running their own queries against a production server

·         Buffer Cache Usage

o    Check out how your data cache is being used

o    With many servers now deploying lots of memory it can be interesting to see how it’s actually used

o    You may be surprised too!

·         User Sessions – similar to sp_who2

o    This was developed as a SSRS report to keep support staff off the production systems

o    ( stops endless sp_who2 commands )

o    You’ll probably want to customise the colour scheme
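To give an idea of the sort of query that might sit behind the backup status report – this is a sketch of my own, not the report’s actual code – msdb holds the backup history:

```sql
-- last full ('D') and log ('L') backup per database, from msdb history
select d.name as 'Database',
max(case when b.type = 'D' then b.backup_finish_date end) as 'Last Full Backup',
max(case when b.type = 'L' then b.backup_finish_date end) as 'Last Log Backup',
databasepropertyex(d.name, 'Recovery') as 'Recovery Model'
from master.sys.databases d
left join msdb.dbo.backupset b on b.database_name = d.name
group by d.name
order by d.name;
```

Comparing the finish dates against your backup schedule (and the recovery model against the log backup column) gives the colour-coding logic described above.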


·         So right click and save the zip files to your PC, extract the rdl files below to a folder on your workstation and open them in SSMS – it’s that easy!


o    BackupStatus

o    blockedusers

o    BufferCache

o    CachePlans

o    Procedure Cache Status

o    RunningProcesses

o    shortsysprocesses

o    Top 20 Indexes in cache

o    Workers


·         You don’t need to load them into BIDS, just browse to them from Custom Reports and they will run.


·         Here are examples of each report, except  Plans in Cache (CachePlans1.rdl ) which was too difficult to manipulate to produce a suitable example.


o    Database Backup and Transaction Log Backup Status

o    Workers , Signal Waits and cpu counters

o    Top 20 indexes  in cache

o    Procedure Cache Status

o    Active Running Processes on the Server

o    Blocked Users

o    Buffer Cache Usage

o    User Sessions


You will need BIDS if you wish to modify the reports, deploy them to SSRS, or change the logo.


So what is this all about then?

·         To support or tune a database server you must have the ability to collect  statistical performance data, perfmon counters for example. 

·         You must be able to establish a baseline to which performance can be compared; otherwise how do you know if your server is busier or quieter than normal?

·         You must have regular reports of performance and reports that assist in diagnosing problems of any sort.

·         You should also build Application Dashboards that track your particular applications, I will include a fictitious example.

·         These reports are intended for the DBA therefore they assume you have sysadmin/dbo rights to the databases.

·         You should have a DBA admin database on each server, this will hold your custom code.



I have a number of other reports which I will publish, some of which link to other statistical gathering processes


( A quick word about the posts - I work mainly in Word rather than html, so most of my posts and web pages are output from Word, sorry. I do use Expression Web for grumpyolddba.co.uk, but again many of the pages still come from Word. )





© www.grumpyolddba.co.uk  February 2008