November 2006 - Posts

How much memory?

This is a question I would ask at interviews: you have 12Gb of RAM on your new SQL Server, so how do you get SQL Server to use it, and how do you know whether SQL Server is actually using the memory?

I've had some discussion on this matter, with two reports raised claiming SQL Server was only using 200Mb of RAM when it should have been using much more.

http://msdn2.microsoft.com/en-us/library/aa196705(SQL.80).aspx   is where to look.

For my interview the answers were: enable AWE in SQL Server, set maximum memory, and check target memory in sysperfinfo ( there's also a perfmon counter, but DBAs prefer SQL solutions! ). The wrong answer was to do nothing and use Task Manager. ( With w2k you also need the /PAE boot switch; w2k3 should have this by default. )
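
Purely as an illustration of the sort of commands involved ( the max memory value of 10240 below is an assumption for a 12Gb box, leaving headroom for the o/s - check the documentation for your build ), the settings look something like this:

-- enable awe and cap max server memory ( sql server 2000 ), then restart the service
exec sp_configure 'show advanced options', 1
reconfigure with override
go
exec sp_configure 'awe enabled', 1
exec sp_configure 'max server memory', 10240    -- MB, leave memory for the o/s
reconfigure with override
go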

select * from master.dbo.sysperfinfo where counter_name like '%server memory%'

should show something like this:-

object_name                  counter_name               instance_name   cntr_value   cntr_type
---------------------------  -------------------------  --------------  -----------  ---------
SQLServer:Memory Manager     Target Server Memory(KB)                    11444552     65536
SQLServer:Memory Manager     Total Server Memory (KB)                    11444552     65536

(2 row(s) affected)

It's also worth noting that AWE can be pretty nasty if you don't limit max memory; I won't bother to repeat the kb article.

 

 

 

Posted by GrumpyOldDBA with 1 comment(s)
Filed under:

Not thinking it through

I'm looking into performance problems on a box which is in a managed data centre, so there is the question of ownership of problems etc. One current issue has been an index rebuild job which has been regularly failing. The job is a simple cursor loop that applies a dbcc dbreindex to each table in turn; the database is in full recovery mode and tran logs get backed up hourly, and it's effectively a 7 x 24 system, so changing the recovery model is not an option. The server is also, in my view, short of disk space: we can only store one backup of the main database, which is about 20gb in size, and the space available for transaction logs is also limited.
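
For anyone who hasn't seen such a job, it was along these lines ( a simplified sketch, not the actual production code ):

-- simplified sketch of the reindex job: a cursor loop applying dbcc dbreindex to each user table
declare @table sysname
declare table_cursor cursor fast_forward for
      select table_schema + '.' + table_name
      from information_schema.tables
      where table_type = 'BASE TABLE'
open table_cursor
fetch next from table_cursor into @table
while @@fetch_status = 0
begin
      dbcc dbreindex ( @table )
      fetch next from table_cursor into @table
end
close table_cursor
deallocate table_cursor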

There were a number of changes from the out of the box server configuration; one which still exists is a Query Governor setting. Upon investigation it was discovered that this query governor setting was responsible for the failure of the dbcc dbreindex job.

Once this was resolved, the reindex of the tables ran the transaction log drive out of space! I'm not going to relate how that was resolved; let's just say the production database now has a different dbid.

The solution to this issue was to add a second loop inside the dbreindex loop to perform a transaction log backup every 10 or 20 tables. Sounds fair; however, the log backup command is issued with init, and the timestamp in the file name is of the usual format dbname_log_yyyymmddhhmm.bak. The problem is that if the dbreindex hits a batch of small tables the loop executes in less than a minute, which means the next log file overwrites the previous file, we lose the continuity of the transaction logs, and we cannot recover the database.
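
To make the failure mode concrete, a log backup built along the lines below ( the database name and path are illustrative ) produces the same file name for every execution within the same minute, and with init that means the previous backup file is silently overwritten:

-- illustrative only: a minute resolution timestamp plus init means two backups
-- taken in the same minute write to ( and overwrite ) the same file
declare @file varchar(260)
set @file = 'D:\Backups\mydatabase_log_'
          + convert(varchar(8), getdate(), 112)                        -- yyyymmdd
          + replace(convert(varchar(5), getdate(), 108), ':', '')      -- hhmm
          + '.bak'
backup log mydatabase to disk = @file with init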

This was “tested” a number of times on the production server, including generating the logs without actually applying the reindex: 20 transaction log backups, all with the same name, in under a minute. As there were no before and after backups of the database ( not enough disk space ), the routine destroyed the recovery point until the next full backup, which didn't occur until 36 hours later, putting us at risk of losing a day's business.

The Query Governor limit has been increased enough to allow the job to run, presumably, although as we haven't yet had a complete run of the job, data changes since may of course cause it to fail next time it runs.

So the moral of this story: beware of changing server settings, because you might not realise what is affected, and think the logic of coding changes through clearly.

Posted by GrumpyOldDBA with 3 comment(s)
Filed under:

Transactional Replication by T-SQL

Haven't forgotten about this but we're still considering which way we want to implement replication. It's proving very interesting and I will publish my findings.
Posted by GrumpyOldDBA with no comments
Filed under:

P & O tools - missing script

Seems I lost the tail end of my posting - the blog timed out in the middle of writing the post, sorry.

This is the script to use against the procedure cache; it returns information for one database only and should be run in the master database.

select bucketid, cacheobjtype, objtype, objid, name, refcounts, usecounts,
       syscacheobjects.uid, setopts, sqlbytes, sql
from master.dbo.syscacheobjects with (nolock)
left outer join mydatabase.dbo.sysobjects with (nolock)
  on objid = id
where dbid = db_id('mydatabase')
order by objid

Substitute your database name in the appropriate places in the code. There's a function on another web site which will turn the numeric setopts values back into the SET options, but I was unable to fully validate it.

Posted by GrumpyOldDBA with no comments
Filed under:

Creating a baseline for P & O part 3

The most important part of performance tuning is being able to establish a baseline. Without a baseline and on-going statistics and trending information you just cannot make informed judgements concerning performance degradation or improvement, or about future growth.

The three parts presented here allow for a pretty good collection of data for trending etc. I know the subject isn't particularly exciting and probably doesn't appeal to many!! I have no reason to believe that I won't carry this on to SQL Server 2005; true, there are lots of management views, but you need hard historical data for analysis if your server falls over.

I hope I've explained it in such a manner it can be understood, catch me at a user group meeting if you need any further clarification.

I have three standard sets of data I collect as a minimum, here is the third.

This is by far the largest post and contains the most amount of code. Again I’m not claiming the concepts behind this post are unique, far from it, but I’ve never seen anyone put all of it together as a complete solution.

So what am I talking about now? I collect perfmon stats and other counters in a sql server table which I also rollup. The very wide report shown below is an indication of what I collect.

For those not in the know, master..sysperfinfo stores all the SQL Server perfmon counter data, constantly updated; it's not all in the easiest format to understand, but nevertheless it's all there. All I do is extract the information every hour, calculate average values for the hour and store them in a table. Every day I roll up the hourly data into daily averages which I store in separate tables.

Keeping a history of all this data is important for trending and prediction.

 

Disclaimer :-  The set of perfmon counters I chose is a personal choice; they are not “the best”, “the optimal” or anything like that. As I mentioned in part 2 of this series I have usually had the benefit of SQL Diagnostic Manager in the background.

 

In addition to the perfmon stats I also collect data from the sp_monitor system procedure. It's worth noting there is a bug in this procedure that means a value goes out of range .. I forget exactly which counter it is, but if you get such an error all that is required is to recreate the procedure changing all the integer variables to bigint. Remember sp_monitor is a system procedure, so you must use sp_MS_marksystemobject to return it to a system procedure.
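
I won't reproduce a modified sp_monitor here, but for reference the marking step is just this ( sp_MS_marksystemobject is undocumented, so the usual caveats apply ):

-- after recreating sp_monitor in master with bigint variables,
-- flag it as a system object again
use master
go
exec sp_MS_marksystemobject 'sp_monitor'
go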

 

I've had to split the report to allow me to snapshot it:-

( report snapshot images )

The code presented isn't “generic”, for performance reasons, so you do need to make some modifications for it to work; please note all the comments regarding these changes. Once again I use a DBA database, this time ServerAdmin, and I use scheduled jobs to store the data.

The raw data is stored every hour. I have two different summary tables: one holds the average values for the entire day, the other holds the average values for the key hours, say 08:00 to 17:00.

I have a job which creates the report shown above every night. I'm not publishing how I produce the reports, although I'll try to remember to include the query, as that would require yet another long blog. I hope to show my “reporting solution” one evening at a user group meeting.

( As a footnote I’ve been producing reports from SQL Server in this manner since 6.5 and this particular report for the past 3 or 4 years )

 

This is my documentation for this process; I'm hoping the formatting will convert correctly from Word. I apologise for not sitting down and writing this in proper html, but it's a large document!

 

1.  Overview

  • The SQL Server Performance Monitor counters are stored internally within SQL Server in the table Sysperfinfo which is located within the master database. The purpose of this table is briefly documented within Books On Line.
  • The SQL Server Performance Counters are variously documented by Microsoft; a good source of reference is “SQL Server 2000 Performance Tuning Technical Reference” - Microsoft Press - ISBN 0-7356-1270-6
  • The available sql counters are shown below; a number of counters are generated for each database

 

 

Auto-Param Attempts/sec                      AWE lookup maps/sec                           AWE stolen maps/sec

AWE unmap calls/sec                             AWE unmap pages/sec                           AWE write maps/sec

Backup/Restore Throughput/sec            Batch Requests/sec                                 Bulk Copy Rows/sec

Bulk Copy Throughput/sec                    Cache Use Counts/sec                            Checkpoint pages/sec

DBCC Logical Scan Bytes/sec               Extent Deallocations/sec                        Extents Allocated/sec

Failed Auto-Params/sec                          Forwarded Records/sec                          Free list empty/sec

Free list requests/sec                               Free list stalls/sec                                    FreeSpace Page Fetches/sec

FreeSpace Scans/sec                               Full Scans/sec                                         Index Searches/sec

Latch Waits/sec                                       Lazy writes/sec                                       Lock Requests/sec

Lock Timeouts/sec                                  Lock Waits/sec                                       Log Bytes Flushed/sec

Log Cache Reads/sec                              Log Flush Waits/sec                               Log Flushes/sec

Logins/sec                                               Logouts/sec                                             Mixed page allocations/sec

Number of Deadlocks/sec                       Page Deallocations/sec                           Page lookups/sec

Page reads/sec                                         Page Splits/sec                                        Page writes/sec

Pages Allocated/sec                                 Probe Scans/sec                                       Range Scans/sec

Readahead pages/sec                               Safe Auto-Params/sec                            Scan Point Revalidations/sec

Shrink Data Movement Bytes/sec          Skipped Ghosted Records/sec                 SQL Compilations/sec

SQL Re-Compilations/sec                      Table Lock Escalations/sec                     Transactions/sec

Unsafe Auto-Params/sec                         Workfiles Created/sec                            Worktables Created/sec

 

  • A set of System Statistical Functions which record data is also available.

 

Function          Description ( from Books On Line )

@@CONNECTIONS     Returns the number of connections, or attempted connections, since SQL Server was last started.
@@CPU_BUSY        Returns the time in milliseconds (based on the resolution of the system timer) that the CPU has spent working since SQL Server was last started.
@@IO_BUSY         Returns the time in milliseconds (based on the resolution of the system timer) that SQL Server has spent performing input and output operations since it was last started.
@@PACK_RECEIVED   Returns the number of input packets read from the network by SQL Server since it was last started.
@@PACK_SENT       Returns the number of output packets written to the network by SQL Server since it was last started.
@@PACKET_ERRORS   Returns the number of network packet errors that have occurred on SQL Server connections since SQL Server was last started.
@@TOTAL_READ      Returns the number of disk reads (not cache reads) by SQL Server since it was last started.
@@TOTAL_WRITE     Returns the number of disk writes by SQL Server since it was last started.
@@TOTAL_ERRORS    Returns the number of disk read/write errors encountered by SQL Server since it was last started.

 

 

  • Technically the descriptions given in BOL are correct although in practice the descriptions are incomplete.
  • There is a system stored procedure, sp_monitor, which works in conjunction with a system table, spt_monitor; both can be found within the master database. The values returned by the statistical functions are reported for the period since the last execution of the stored procedure OR the start of the SQL Server service, whichever was the more recent.
  • Knowing this, we can extract the statistics for our own use by running sp_monitor at regular intervals.
  • Sp_monitor is a documented system procedure; examination of it shows that it uses the table spt_monitor ( which is not documented )
  • A typical result set from this table is shown below

lastrun         2006-01-25 09:01:03.703
cpu_busy        22852929
io_busy         735107
idle            137159340
pack_received   113903076
pack_sent       121546762
connections     931224
pack_errors     532
total_read      9254818
total_write     64512368
total_errors    0

 

 

2.     Overview of collection Process

 

·         To facilitate the collection process the functionality is handled by SQL Server in a user database.

·         Performance data is extracted with stored procedures

·         Performance data is stored in tables

·         Collection is achieved using the Agent Job Scheduler

 

A snapshot of the statistical data is taken at regular intervals ( every hour ); the average value(s) for that period are then calculated and written to a table.

The disadvantage of this approach is that it does not provide much granularity, inasmuch as any extremes will be masked.

The advantage is that the number of data sets is manageable and the collection has minimal impact upon the server, if any.

Once a day the hourly data is rolled up into an average for the day, which allows for simple day to day comparison.

Thus the data collected is adequate for viewing patterns and trends but less suited to detailed analysis.

 

 

 

Overview of hourly statistic gathering job.

( The database name ServerDBA is used as an indicator to show object placement  )

 

 

3. Counters collected

 

For each counter / column the value recorded and a description are given below:

TheDate
  Value:        Datetime - timestamp of the table row entry
  Description:  The values for the counters represent the average for the hour ending shown in this column

CacheHitRatio
  Value:        %age value (average) for the time period
  Description:  This is the total ratio for what is normally called the procedure cache. Actually the cache holds plans for more than just procedures. ( see table 6 ) ( vii )

cpu_busy
  Value:        Milliseconds total for the time period ( i )
  Description:  This is a measure of cpu use and indicates the ms of use in the hour period

io_busy
  Value:        Milliseconds total for the time period ( i )
  Description:  This is a measure of i/o use and indicates the ms of use in the hour period

Logins
  Value:        Number of connected users (average) for the time period
  Description:  The number of persisted connections to the server. Each connection typically takes around 48k of memory. Connection memory cannot come from extended memory, and thus unnecessary persisted connections can decrease the available memory for other processes.

Connections
  Value:        Login attempts per second (average) for the time period
  Description:  The rate of connections and connection attempts. Where connections to sql server use pooling, a high figure may mean pooling is not configured correctly

PacketsReceived
  Value:        Packets received per second (average) for the time period
  Description:  Measure of network activity

PacketsSent
  Value:        Packets sent per second (average) for the time period
  Description:  Measure of network activity

PacketErrors
  Value:        Total of packet errors for the time period
  Description:  Should be nil

Reads
  Value:        Reads per second (average) for the time period
  Description:  Count of page reads executed by SQL Server. Indication of server activity.

Writes
  Value:        Writes per second (average) for the time period
  Description:  Count of page writes executed by SQL Server. Indication of server activity.

RWErrors
  Value:        Total number of read or write errors for the time period
  Description:  Should be nil

TotalTransactions
  Value:        Transactions per second (average) for the entire server for the time period
  Description:  Measure of server activity

UserDB1_Transactions
  Value:        Transactions per second (average) for the database for the time period
  Description:  Measure of database activity

UserDB2_Transactions
  Value:        Transactions per second (average) for the database for the time period
  Description:  Measure of database activity

UserDB3_Transactions
  Value:        Transactions per second (average) for the database for the time period
  Description:  Measure of database activity

UserDB4_Transactions
  Value:        Transactions per second (average) for the database for the time period
  Description:  Measure of database activity

TempdbTransactions
  Value:        Transactions per second (average) for the database for the time period
  Description:  Measure of tempdb activity. Tempdb is the working area for SQL Server. High levels of transactions in tempdb relative to the overall server figure may be of interest ( ii )

FullScans
  Value:        Number of unrestricted full scans per second (average) for the time period
  Description:  These can be either base-table or full-index scans. This figure should be low ( iii )

IndexSearches
  Value:        Number of index lookups per second (average) for the time period
  Description:  This counter shows the number of index searches SQL Server is performing. SQL Server uses index searches to start range scans, to retrieve a specific record by using one index, and to reposition within an index.

PageSplits
  Value:        Number of page splits per second (average), when an index page overflows, for the time period
  Description:  This figure should be low. ( iv )

LockRequests
  Value:        The total number of lock requests per second (average) for the time period
  Description:  Locks have to exist within lower memory, so excessive lock requests may result in memory issues; likewise other factors may reduce the available memory for locks. If locks become an area of concern, greater granularity can be obtained by measuring lock types ( v )

LockTimeouts
  Value:        The number of times per second (average) that a lock could not be obtained when requested, for the time period
  Description:  Excessive lock timeouts may indicate application problems, memory pressures and/or an excessive number of requests ( v )

Deadlocks
  Value:        The total number of deadlocks that occurred during the monitored time period
  Description:  This counter should be zero

LatchWaits
  Value:        The number of waits per second (average) on latch requests
  Description:  Latches are lightweight “locks” used to aid performance. This figure should be low. A high figure would indicate resource issues, probably memory.

LockWaits
  Value:        The number of times per second (average) that a thread had to wait for a lock during the monitored time period
  Description:  This value should be low ( v ); if this counter is high it may indicate performance issues.

Compilations
  Value:        The number of times per second (average) that sql compilations occur for the monitored time period
  Description:  Used as a measure of work

WorkFiles
  Value:        The number of workfiles, such as those created internally during query execution, per second (average) for the monitored time period
  Description:  Used as a measure of work ( vi )

WorkTables
  Value:        The number of worktables, such as those created internally during sorts, unions and groupings, per second (average) for the monitored time period
  Description:  Used as a measure of work ( vi )

HourCounter
  Value:        Numeric sequence to provide a unique row identifier

 

  

Notes:

 

(i)  Despite the documentation within BOL, KB813199 tells us that the actual number returned is measured in ticks, not milliseconds ( or seconds ); to get an approximate figure in milliseconds, multiply the result by 32.
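
A quick way to sanity check the conversion: @@TIMETICKS returns the number of microseconds per tick, so the exact figure can be compared with the *32 approximation ( the cast to bigint guards against the same sort of integer overflow mentioned earlier for sp_monitor ):

-- compare the *32 approximation with the conversion via @@TIMETICKS
select @@cpu_busy                                        as cpu_ticks,
       cast(@@cpu_busy as bigint) * 32                   as cpu_ms_approx,
       cast(@@cpu_busy as bigint) * @@timeticks / 1000.0 as cpu_ms_exact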

( ii )      Tempdb is often overlooked when considering performance issues.

·          Analysis of the work within tempdb may indicate excessive creation of temporary tables, large numbers of sort and group by operations.

·          Activity within tempdb should be viewed along with workfiles and worktables.

·          A heavily used tempdb may slow the entire server.

 

( iii )     Full scans also include activity from the system databases and scans that SQL Server initiates itself.

·          Whilst in some situations table scans may be more efficient this figure should be viewed for fluctuations.

·          There should be a ratio between the figures for fullscans and index searches of, say, 15% - 85%; a query such as the one sketched below can track this from the collected data.
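
As a rough illustration only, using the Daily_Statistics table defined later in this document, the ratio could be tracked with something like:

-- proportion of full scans relative to all scan / seek activity, per collected hour
select TheDate,
       FullScans,
       IndexSearches,
       FullScans * 100.0 / nullif(FullScans + IndexSearches, 0) as full_scan_pct
from ServerAdmin.dbo.Daily_Statistics
order by TheDate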

 

( iv )     As with any server counter this includes page splits within system and tempdb databases.

·          Certain tools including profiler can distort this figure.

·          Page splits may be reduced by introducing a fill factor within indexes.

·          Too many page splits are bad; ideally this value should be very low, but in practice it often isn't.

·          A page split occurs when a new record is inserted within an existing page and there is no room. Note that updates can cause page splits.

·          Page splits are expensive in performance terms and are also likely to result in data fragmentation in the database.

 

( v )      There are 49 perfmon counters which provide information about locks.

·          These counters measure SQL Server locks; they do not directly measure locking and blocking activity within user databases.

·          Use the counters together as a measure of server activity.

·          Lock Requests / second  =  locks requested by server processes

·          Lock Timeouts / second =  Number of times a lock was not available when requested

·          Lock Waits / second = Number of times a lock request caused a thread wait

 

( vi )    Worktables and Workfiles will tend to indicate use of tempdb.

·          SQL Server will always try to perform operations within memory but when this is not possible will use tempdb.

·          These counters should be used with the transactions/sec for tempdb as an indication of tempdb performance.

·          Placing tempdb on a separate and/or faster disk array may increase performance.

·          Any disk activity in tempdb will technically degrade performance.

 

(vii )    This value should always be high, >90%, and is an indication of how often cached plans satisfy calls ( a query to compute the ratio per cache type is sketched below ).

·          For 32bit SQL Server the procedure cache may not exist in extended memory so can be constrained by other memory pressures.

·          A low cache hit ratio is an indication of a lack of server memory or of less than optimal code.

·          A low cache hit ratio can also be produced where the volume of  transactions and number of databases is such that the procedure cache cannot support them.

·          The procedure cache is unlikely to grow much more than 1 Gb
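
As a sketch only ( not part of the collection procedures ), the per cache type ratio shown in the table below can be produced directly from sysperfinfo using the Cache Hit Ratio and Cache Hit Ratio Base counters:

-- point in time procedure cache hit ratio for each cache type
select rtrim(a.instance_name) as cache_type,
       a.cntr_value * 100.0 / b.cntr_value as hit_ratio_pct
from master.dbo.sysperfinfo a with (nolock)
join master.dbo.sysperfinfo b with (nolock)
  on a.instance_name = b.instance_name
where a.object_name = 'SQLServer:Cache Manager'
  and a.counter_name = 'Cache Hit Ratio'
  and b.object_name = 'SQLServer:Cache Manager'
  and b.counter_name = 'Cache Hit Ratio Base'
  and b.cntr_value > 0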

 

Procedure Cache Counters

 

Counter Type               %age ratio
Misc. Normalized Trees     90.45
Execution Contexts         49.28
Cursors                    99.93
Trigger Plans              99.81
Adhoc Sql Plans            92.42
Prepared Sql Plans         99.53
Procedure Plans            51.60
_Total                     50.98

4.  Deployment

Create a database to hold the information, the code within this document expects the database to be called ServerAdmin.

 

Choose which databases to collect data from. The names of the databases are embedded in the objects:-

  • Table  -        Daily_Statistics
  • Table  -        Growth_Statistics8to6
  • Table  -        Growth_Statistics
  • Table  -        PerfData_Wide
  • Procedure      InitialiseStatistics
  • Procedure      GenerateStatistics_Hourly
  • Procedure      RollupDailyData
  • Procedure      RollupDailyData8to6

 

Hopefully setting up the process to gather statistics on a server will only happen once. The disadvantages and complexity of attempting to code the procedures and tables dynamically seem to far outweigh the slight inconvenience of the hard coding.

 

The following explains what and where needs to be coded and how to add additional counters or change counters collected.

 

  • The tables Daily_Statistics, Growth_Statistics8to6 and Growth_Statistics require a column for each collected system statistic ( SQL Server system statistics and perfmon counters )
  • The table PerfData_Wide requires a column for each perfmon counter collected
  • The stored procedures listed above read specific values from the SQL Server table master..sysperfinfo. The names of the counters can be extracted by viewing the sysperfinfo table, for example with the query below
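
One way to list them ( the rtrim just tidies the padded char columns ):

-- list every counter ( and object ) exposed by this build of sql server
select distinct rtrim(object_name) as object_name, rtrim(counter_name) as counter_name
from master.dbo.sysperfinfo with (nolock)
order by object_name, counter_name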

 

Sample view below:-

 

object_name                 counter_name                  instance_name            cntr_value   cntr_type
SQLServer:Access Methods    FreeSpace Scans/sec                                    629381726    272696320
SQLServer:Access Methods    FreeSpace Page Fetches/sec                             629388361    272696320
SQLServer:Access Methods    Pages Allocated/sec                                    6269147      272696320
SQLServer:Access Methods    Extents Allocated/sec                                  1886276      272696320
SQLServer:Access Methods    Mixed page allocations/sec                             660485       272696320
SQLServer:Access Methods    Extent Deallocations/sec                               5056485      272696320
SQLServer:Access Methods    Page Deallocations/sec                                 0            272696320
SQLServer:Access Methods    Page Splits/sec                                        2146312      272696320
SQLServer:Access Methods    Table Lock Escalations/sec                             500110       272696320
SQLServer:SQL Statistics    Batch Requests/sec                                     135081855    272696320
SQLServer:SQL Statistics    Auto-Param Attempts/sec                                917309       272696320
SQLServer:SQL Statistics    Failed Auto-Params/sec                                 503084       272696320
SQLServer:SQL Statistics    Safe Auto-Params/sec                                   402965       272696320
SQLServer:SQL Statistics    Unsafe Auto-Params/sec                                 11252        272696320
SQLServer:SQL Statistics    SQL Compilations/sec                                   2065534      272696320
SQLServer:SQL Statistics    SQL Re-Compilations/sec                                63576        272696320
SQLServer:Cache Manager     Cache Hit Ratio               Misc. Normalized Trees   181912       537003008

For instance, to obtain the value for Page Splits, this counter must have a column in each table e.g.

 CREATE TABLE [dbo].[Daily_Statistics] (

            [TheDate] [datetime] NOT NULL ,

            .

           .

            [PageSplits] [numeric](18, 6) NULL ,

            .

               .

 The value for this counter then must be extracted from sysperfinfo within the stored procedures

e.g. 

 --

insert into ServerAdmin.dbo.TranTable(counter,value)

select counter_name,cntr_value from master.dbo.sysperfinfo  where counter_name in ('Full Scans/sec','Index Searches/sec','Page Splits/sec','Workfiles Created/sec',

'Worktables Created/sec','SQL Compilations/sec','latch waits/sec')

--   

The values obtained at each hourly sample are stored and then averaged at the next sample time; the results are then written to the Daily_Statistics table.

 

  • Once the counters required have been defined create the empty tables and the stored procedures in the ServerAdmin database. The table Perfdata_Wide must have a value 1 inserted into column NumKey. 
  • It is vital that names exactly match sysperfinfo data. 
  • Follow the assignment of default and null values for counters as in the existing table structures 
  • The objects,  stored procedure InitialiseMonitorStats  and table  TranTable do not require any hard coded values regarding counters or user databases. 

Decide the running times:-

 

  • The procedures should be executed by  scheduled agent jobs 
  • If data is to be gathered every hour every day then the two procedures InitialiseStatistics and InitialiseMonitorStats need to be run once only, just after the hour, to initialise the data collection. 
  • These stored procedures seed the tables prior to collection; if stats are only to be gathered, say, from 08:00 – 21:00 then execute the procedures just after 08:00 every morning.
  • To collect the statistics run the procedure  GenerateStatistics_Hourly every hour, just after the hour, in our example above we would schedule it to run from 09:01 to 21:01.
  • To collect statistics 24 x 7 we would schedule it to run every hour just after the hour 7 days a week. 

Rolling up the daily data:-

 

  • There are two stored procedures, RollupDailyData and RollupDailyData8to6; these average the data collected during the day into a daily figure ( a minimal sketch of the idea appears after this list ).
  • RollupDailyData  holds the average for the entire day
  • RollupDailyData8to6  holds the average between 08:00 and 18:00 
  • These two stored procedures should be run just after midnight. 
  • The granularity of the averaged data can be adjusted by altering the collection values within the stored procedure, thus an additional table and procedure could be created to average the counters between 03:00 and 06:00 for example. 
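
As an illustration of the idea only ( a simplified sketch, not the full procedure ), a minimal rollup could average the previous day's hourly rows from Daily_Statistics into Growth_Statistics, relying on the table's default of getdate()-1 for TheDate; only a few columns are shown here:

-- minimal sketch of a daily rollup; the real procedure covers every collected counter
insert into ServerAdmin.dbo.Growth_Statistics
      ( CacheHitRatio, cpu_busy, io_busy, TotalTransactions, FullScans, IndexSearches )
select avg(CacheHitRatio), avg(cpu_busy), avg(io_busy),
       avg(TotalTransactions), avg(FullScans), avg(IndexSearches)
from ServerAdmin.dbo.Daily_Statistics
where TheDate >= convert(char(10), getdate()-1, 120)
  and TheDate <  convert(char(10), getdate(), 120)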

To increase granularity:- 

  • Note that increasing granularity increases the amount of data collected and does not affect the accuracy of the rolled up data. 
  • The stored procedure  GenerateStatistics_Hourly averages the data based upon an hour interval. 

To collect data every 5 minutes replace the divisor 3600 in the proc with 300 and alter the job schedule to run every 5 minutes. 

 ,abs(([_Total]-@TotalTrans)/3600)

,abs(([Lock Requests/sec]-@LockRequests)/3600)

,abs(([Lock Timeouts/sec]-@LockTimeouts)/3600.0)

 

To

 

,abs(([_Total]-@TotalTrans)/300)

,abs(([Lock Requests/sec]-@LockRequests)/300)

,abs(([Lock Timeouts/sec]-@LockTimeouts)/300.0)

 

If the values obtained for some counters are low they may be being rounded; make the divisor a decimal by adding a .0

 

5.         Notes Concerning sp_monitor 

  • sp_monitor  is a system stored procedure which returns system information in conjunction with the table spt_monitor.
  • Each call to sp_monitor, however, resets the values, so calls should not be made to this stored procedure other than through the statistical gathering procedures.
  • This procedure does not have public permissions, so it can only be called by a SysAdmin or a user granted dbo in master.

 6.  Appendix 1  Table Definitions

 

Table Daily_Statistics                   ( Holds hourly data ) 

 

Use ServerAdmin

go

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[Daily_Statistics]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)

drop table [dbo].[Daily_Statistics]

GO

 

CREATE TABLE [dbo].[Daily_Statistics] (

            [TheDate] [datetime] NOT NULL ,

            [CacheHitRatio] [real] NULL ,

            [cpu_busy] [bigint] NULL ,

            [io_busy] [bigint] NULL ,

            [Logins] [bigint] NULL ,

            [Connections] [bigint] NULL ,

            [PacketsReceived] [bigint] NULL ,

            [PacketsSent] [bigint] NULL ,

            [PacketErrors] [bigint] NULL ,

            [Reads] [bigint] NULL ,

            [Writes] [bigint] NULL ,

            [RWErrors] [bigint] NULL ,

            [TotalTransactions] [numeric](20, 10) NULL ,

--

-- put user database names here

--

UserDB1Trans [numeric](20, 10) NULL ,

UserDB2Trans [numeric](20, 10) NULL ,

UserDB3Trans [numeric](20, 10) NULL ,

UserDB4Trans [numeric](20, 10) NULL ,

--

-- this is all that changes server to server

--

            [TempdbTransactions] [numeric](20, 10) NULL ,

            [FullScans] [bigint] NULL ,

            [IndexSearches] [bigint] NULL ,

            [PageSplits] [numeric](20, 10) NULL ,

            [LockRequests] [bigint] NULL ,

            [LockTimeouts] [numeric](20, 10) NULL ,

            [Deadlocks] [bigint] NULL ,

            [LatchWaits] [numeric](20, 10) NULL ,

            [LockWaits] [numeric](20, 10) NULL ,

            [Compilations] [bigint] NULL ,

            [WorkFiles] [bigint] NULL ,

            [WorkTables] [bigint] NULL ,

            [HourCounter] [int] IDENTITY (1, 1) NOT NULL

) ON [PRIMARY]

GO

 

ALTER TABLE [dbo].[Daily_Statistics] WITH NOCHECK ADD

            CONSTRAINT [PK_Daily_Statistics] PRIMARY KEY  CLUSTERED

            (

                        [TheDate]

            ) WITH  FILLFACTOR = 100  ON [PRIMARY]

GO

 

ALTER TABLE [dbo].[Daily_Statistics]  ADD

            CONSTRAINT [DF_Daily_Statistics_TheDate] DEFAULT (getdate()) FOR [TheDate],

            CONSTRAINT [UC_Daily_Statistics] UNIQUE  NONCLUSTERED

            (

                        [HourCounter]

            ) WITH  FILLFACTOR = 100  ON [PRIMARY]

GO

 

 Table Growth_Statistics8to6          ( Holds rolled up daily data for the hours 08:00 to 18:00 )

( customise to suit core hours etc. ) 

 

CREATE TABLE [dbo].[Growth_Statistics8to6] (

            [TheDate] [datetime] NOT NULL ,

            [CacheHitRatio] [real] NULL ,

            [cpu_busy] [bigint] NULL ,

            [io_busy] [bigint] NULL ,

            [Logins] [bigint] NULL ,

            [Connections] [bigint] NULL ,

            [PacketsReceived] [bigint] NULL ,

            [PacketsSent] [bigint] NULL ,

            [PacketErrors] [bigint] NULL ,

            [Reads] [bigint] NULL ,

            [Writes] [bigint] NULL ,

            [RWErrors] [bigint] NULL ,

            [TotalTransactions] [numeric](20, 10) NULL ,

--

-- put user database names here

--

UserDB1Trans [numeric](20, 10) NULL ,

UserDB2Trans [numeric](20, 10) NULL ,

UserDB3Trans [numeric](20, 10) NULL ,

UserDB4Trans [numeric](20, 10) NULL ,

--

-- this is all that changes server to server

--

            [TempdbTransactions] [numeric](20, 10) NULL ,

            [FullScans] [bigint] NULL ,

            [IndexSearches] [bigint] NULL ,

            [PageSplits] [numeric](20, 10) NULL ,

            [LockRequests] [bigint] NULL ,

            [LockTimeouts] [numeric](20, 10) NULL ,

            [Deadlocks] [bigint] NULL ,

            [latchWaits] [numeric](20, 10) NULL ,

            [LockWaits] [numeric](20, 10) NULL ,

            [Compilations] [bigint] NULL ,

            [WorkFiles] [bigint] NULL ,

            [WorkTables] [bigint] NULL

) ON [PRIMARY]

GO

ALTER TABLE [dbo].[Growth_Statistics8to6] WITH NOCHECK ADD

            CONSTRAINT [PK_Growth_Statistics8to6] PRIMARY KEY  CLUSTERED

            (

                        [TheDate]

            ) WITH  FILLFACTOR = 100  ON [PRIMARY]

GO

--

-- as the populate job runs after midnight we need the previous days date

--

ALTER TABLE [dbo].[Growth_Statistics8to6] ADD

            CONSTRAINT [DF_Growth_Statistics8to6_TheDate] DEFAULT (getdate()-1) FOR [TheDate]

GO

 

Table Growth_Statistics                ( Holds rolled up daily data ) 

 

CREATE TABLE [dbo].[Growth_Statistics] (

            [TheDate] [datetime] NOT NULL ,

            [CacheHitRatio] [real] NULL ,

            [cpu_busy] [bigint] NULL ,

            [io_busy] [bigint] NULL ,

            [Logins] [bigint] NULL ,

            [Connections] [bigint] NULL ,

            [PacketsReceived] [bigint] NULL ,

            [PacketsSent] [bigint] NULL ,

            [PacketErrors] [bigint] NULL ,

            [Reads] [bigint] NULL ,

            [Writes] [bigint] NULL ,

            [RWErrors] [bigint] NULL ,

            [TotalTransactions] [numeric](20, 10) NULL ,

--

--

-- put user database names here

--

UserDB1Trans [numeric](20, 10) NULL ,

UserDB2Trans [numeric](20, 10) NULL ,

UserDB3Trans [numeric](20, 10) NULL ,

UserDB4Trans [numeric](20, 10) NULL ,

--

-- this is all that changes server to server

--

            [TempdbTransactions] [numeric](20, 10) NULL ,

            [FullScans] [bigint] NULL ,

            [IndexSearches] [bigint] NULL ,

            [PageSplits] [numeric](20, 10) NULL ,

            [LockRequests] [bigint] NULL ,

            [LockTimeouts] [numeric](20, 10) NULL ,

            [Deadlocks] [bigint] NULL ,

            [latchWaits] [numeric](20, 10) NULL ,

            [LockWaits] [numeric](20, 10) NULL ,

            [Compilations] [bigint] NULL ,

            [WorkFiles] [bigint] NULL ,

            [WorkTables] [bigint] NULL

) ON [PRIMARY]

GO

 

ALTER TABLE [dbo].[Growth_Statistics] WITH NOCHECK ADD

            CONSTRAINT [PK_Growth_Statistics] PRIMARY KEY  CLUSTERED

            (

                        [TheDate]

            ) WITH  FILLFACTOR = 100  ON [PRIMARY]

GO

--

-- as the populate job runs after midnight we need the previous days date

--

 

ALTER TABLE [dbo].[Growth_Statistics] ADD

            CONSTRAINT [DF_Growth_Statistics_TheDate] DEFAULT (getdate()-1) FOR [TheDate]

GO

 

 Table  PerfData_Wide                   ( Holds working values of perfmon counters ) 

 

CREATE TABLE [dbo].[PerfData_Wide] (

            [UserDB1] [int] NOT NULL ,

            [UserDB2] [int] NOT NULL ,

            [UserDB3] [int] NOT NULL ,

            [UserDB4] [int] NOT NULL ,

            [tempdb] [int] NOT NULL ,

            [_Total] [int] NOT NULL ,

            [Lock Requests/sec] [bigint] NOT NULL ,

            [Lock Timeouts/sec] [int] NOT NULL ,

            [Number of Deadlocks/sec] [int] NOT NULL ,

            [Lock Waits/sec] [int] NOT NULL ,

            [latch waits/sec] [int] NOT NULL ,

            [Full Scans/sec] [int] NOT NULL ,

            [Workfiles Created/sec] [int] NOT NULL ,

            [Worktables Created/sec] [int] NOT NULL ,

            [Index Searches/sec] [int] NOT NULL ,

            [Page Splits/sec] [int] NOT NULL ,

            [SQL Compilations/sec] [int] NOT NULL ,

            [NumKey] [int] NOT NULL

) ON [PRIMARY]

GO

 

ALTER TABLE [dbo].[PerfData_Wide] WITH NOCHECK ADD

            CONSTRAINT [PK_PerfData_Wide] PRIMARY KEY  CLUSTERED

            (

                        [NumKey]

            )  ON [PRIMARY]

GO

 

ALTER TABLE [dbo].[PerfData_Wide] ADD

            CONSTRAINT [DF_PerfData_Wide_BS1] DEFAULT (0) FOR [UserDB1],

            CONSTRAINT [DF_PerfData_Wide_BS2] DEFAULT (0) FOR [UserDB2],

            CONSTRAINT [DF_PerfData_Wide_BS3] DEFAULT (0) FOR [UserDB3],

            CONSTRAINT [DF_PerfData_Wide_BS4] DEFAULT (0) FOR [UserDB4],

            CONSTRAINT [DF_PerfData_Wide_tempdb] DEFAULT (0) FOR [tempdb],

            CONSTRAINT [DF_PerfData_Wide__Total] DEFAULT (0) FOR [_Total],

            CONSTRAINT [DF_PerfData_Wide_Lock Requests/sec] DEFAULT (0) FOR [Lock Requests/sec],

            CONSTRAINT [DF_PerfData_Wide_Lock Timeouts/sec] DEFAULT (0) FOR [Lock Timeouts/sec],

            CONSTRAINT [DF_PerfData_Wide_Number of Deadlocks/sec] DEFAULT (0) FOR [Number of Deadlocks/sec],

            CONSTRAINT [DF_PerfData_Wide_Lock Waits/sec] DEFAULT (0) FOR [Lock Waits/sec],

            CONSTRAINT [DF_PerfData_Wide_latch waits/sec] DEFAULT (0) FOR [latch waits/sec],

            CONSTRAINT [DF_PerfData_Wide_Full Scans/sec] DEFAULT (0) FOR [Full Scans/sec],

            CONSTRAINT [DF_PerfData_Wide_Workfiles Created/sec] DEFAULT (0) FOR [Workfiles Created/sec],

            CONSTRAINT [DF_PerfData_Wide_Worktables Created/sec] DEFAULT (0) FOR [Worktables Created/sec],

            CONSTRAINT [DF_PerfData_Wide_Index Searches/sec] DEFAULT (0) FOR [Index Searches/sec],

            CONSTRAINT [DF_PerfData_Wide_Page Splits/sec] DEFAULT (0) FOR [Page Splits/sec],

            CONSTRAINT [DF_PerfData_Wide_SQL Compilations/sec] DEFAULT (0) FOR [SQL Compilations/sec]

GO

 

 Table  TranTable      ( Holds working values of perfmon counters for database transactions ) 

 

CREATE TABLE [dbo].[TranTable] (

            [NumKey] [int] IDENTITY (1, 1) NOT NULL ,

            [counter] [varchar] (100)  NOT NULL ,

            [value] [bigint] NOT NULL

) ON [PRIMARY]

GO

 

ALTER TABLE [dbo].[TranTable] ADD

            CONSTRAINT [UK_TranTable] UNIQUE  NONCLUSTERED

            (

                        [NumKey]

            ) WITH  FILLFACTOR = 100  ON [PRIMARY]

GO

 

 7.  Appendix 2  Stored procedures

 

Procedure  InitialiseMonitorStats  ( initialises the values generated by sp_monitor ) 

 

CREATE proc dbo.InitialiseMonitorStats

-- ============================================================= 

--  Procedure:              InitialiseMonitorStats                         

--  Written by:            (c) Colin Leversuch-Roberts 

--                                  www.kelemconsulting.co.uk                              

--                                                                

--  Purpose:                 Part of set of procs and tables to monitor trends in sql server usage

--                                                               

--  System:                   DBA Monitoring - ServerAdmin

--

--  Input Parameters:     none

--                                 

--  Returns :                 none                                     

--                                                               

--  Usage:                   Called by scheduled task

--                                                               

--  Notes:                     This proc is part of the set:-    InitialiseMonitorStats

--                                                                                  InitialiseStatistics

--                                                                                  GenerateStatistics_Hourly

--                                                                                  RollupDailyData

--                                                                                  RollupDailyData8to6

--

--                                  This procedure initialises the master..spt_monitor table,

--                                  provided precisely for this purpose.

--                                  In a 7 x 24 env only has to be run the once or if we are re-initialising the stats

--                                  after a server rebuild or such

--                                  Where monitoring is not 7x24 then run this to initialise the stats

--                                  one hour prior to the stat gathering proc.

--

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              9 March 2004   Initial Release                             

-- ============================================================= 

as

Set nocount on

declare @cpu_busy_init bigint,@io_busy_init bigint

declare @packet_received_init bigint,@packet_sent_init bigint

declare @packet_error_init bigint, @connections_init bigint

declare @total_reads_init bigint,@total_writes_init bigint

declare @rw_errors_init bigint

--

select

@cpu_busy_init = cpu_busy

,@io_busy_init =  io_busy

,@packet_received_init = pack_received

,@packet_sent_init = pack_sent

,@connections_init = connections

,@packet_error_init = pack_errors

,@total_reads_init = total_read

,@total_writes_init = total_write

,@rw_errors_init = total_errors

from master.dbo.spt_monitor

--

-- select * from master.dbo.spt_monitor

--

exec master.dbo.sp_monitor

--

insert into ServerAdmin.dbo.Daily_Statistics(

cpu_busy

,io_busy

,PacketsReceived

,PacketsSent

,Connections

,PacketErrors

,Reads

,Writes

,RWErrors)

select

(cpu_busy - @cpu_busy_init) *32                    --see KB813199

,(io_busy - @io_busy_init) * 32                       --see KB813199

,(pack_received - @packet_received_init)/3600

, (pack_sent - @packet_sent_init)/3600

, (connections - @connections_init)/3600

, (pack_errors - @packet_error_init)    -- count not an average

, (total_read - @total_reads_init)/3600

, (total_write - @total_writes_init)/3600

, (total_errors - @rw_errors_init)                    -- count not an average

from master.dbo.spt_monitor

GO

 

Procedure  InitialiseStatistics  ( initialise and collect starting values for perfmon counters ) 

 

Create Proc dbo.InitialiseStatistics

-- ============================================================= 

--  Procedure:              InitialiseStatistics                         

--  Written by:            (c) Colin Leversuch-Roberts 

--                                  www.kelemconsulting.co.uk                              

--                                                               

--  Purpose:                 Part of set of procs and tables to monitor trends in sql server usage

--                                                                

--  System:                   DBA Monitoring - ServerAdmin

--

--  Input Parameters:     none

--                                 

--  Returns :                 none                                     

--                                                                

--  Usage:              Called by scheduled task

--                                                               

--  Notes:                     This proc is part of the set:-    InitialiseMonitorStats

--                                                                                  InitialiseStatistics

--                                                                                  GenerateStatistics_Hourly

--                                                                                  RollupDailyData

--                                                                                  RollupDailyData8to6

--

--                                  This procedure initialises the data in the ServerAdmin..TranTable and ServerAdmin..PerfData_Wide

--                                  used during the day for calculating stats

--                                  In a 7 x 24 env only has to be run the once or if we are re-initialising the stats

--                                  after a server rebuild or such

--                                  Where monitoring is not 7x24 then run this to initialise the stats

--                                  one hour prior to the stat gathering proc.

--

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              9 March 2004   Initial Release                             

-- ============================================================= 

as

set nocount on

declare @count tinyint,@counter varchar(100),@value int,@cmd nvarchar(250)

--

-- clear the temporary storage table

--

truncate table ServerAdmin.dbo.TranTable

--

-- now populate the temporary storage table

--

-- NOTE

-- We have to store the values at hour intervals and then do the maths at each successive interval

-- this means the following are an average for the hour period

--

-- transactions per second for user databases, tempdb and total for server

--

insert into ServerAdmin.dbo.TranTable

select instance_name,cntr_value from master.dbo.sysperfinfo where counter_name ='transactions/sec'

and instance_name in('_Total',

'UserDB1',

'UserDB2',

'UserDB3',

'UserDB4',

'tempdb')

--

-- locks -- requests  timeouts  deadlocks lock waits

-- here we will store the number of requests, waits and timeouts for the period then average per/sec for the period

-- deadlocks we will store the actual number within the period

--

insert into ServerAdmin.dbo.TranTable

select counter_name,cntr_value from master.dbo.sysperfinfo where counter_name in('Lock Requests/sec','Lock Timeouts/sec','Number of Deadlocks/sec','Lock Waits/sec')

and instance_name='_Total'

--

-- scans --  tables scans  index scans  page splits workfiles and tables compilations

-- averaged for the period per sec

--

insert into ServerAdmin.dbo.TranTable

select counter_name,cntr_value from master.dbo.sysperfinfo  where counter_name in ('Full Scans/sec','Index Searches/sec','Page Splits/sec','Workfiles Created/sec',

'Worktables Created/sec','SQL Compilations/sec','latch waits/sec')

--   

-- Now "cross tab" to a wide table so i can read the values hourly

--

set @count=1

while @count<=(select max(NumKey) from ServerAdmin.dbo.TranTable)

BEGIN

            select @counter=quotename(rtrim(counter)),@value=isnull(value,0) from ServerAdmin.dbo.TranTable where NumKey=@count

            set @cmd= 'Update ServerAdmin.dbo.PerfData_wide  set '+@counter+' = '+convert(varchar,@value)+' where numkey=1'

            exec (@cmd)

            --print @cmd

            set @count=@count+1

END

--endwhile

GO

 Procedure GenerateStatistics_Hourly   ( Run every hour to collect and calculate data values ) 

 

CREATE Procedure dbo.GenerateStatistics_Hourly

-- ============================================================= 

--  Procedure:              GenerateStatistics_Hourly                         

--  Written by:              (c) Colin Leversuch-Roberts

--                                  www.kelemconsulting.co.uk                               

--                                                                

--  Purpose:                 Part of set of procs and tables to monitor trends in sql server usage

--                                                               

--  System:                   DBA Monitoring - ServerAdmin

--

--  Input Parameters:     none

--                                 

--  Returns :                 none                                     

--                                                               

--  Usage:                    Called by scheduled task

--                                                                

--  Notes:                     This proc is part of the set:-    InitialiseMonitorStats

--                                                                                  InitialiseStatistics

--                                                                                  GenerateStatistics_Hourly

--                                                                                  RollupDailyData

--                                                                                  RollupDailyData8to6

--

--                                  This procedure generates the hourly data in ServerAdmin..Daily_Statistics from the ServerAdmin..TranTable

--                                  and ServerAdmin..PerfData_Wide tables

--

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              9 March 2004   Initial Release                             

-- ============================================================= 

as

set nocount on

--

-- this will run at every hour

--

declare @cpu_busy_init bigint,@io_busy_init bigint,@cpu_busy_upd bigint,@io_busy_upd bigint

declare @packet_received_init bigint,@packet_sent_init bigint,@packet_received_upd bigint,@packet_sent_upd bigint

declare @packet_error_init bigint, @connections_init bigint,@packet_error_upd bigint, @connections_upd bigint

declare @total_reads_init bigint,@total_writes_init bigint,@total_reads_upd bigint,@total_writes_upd bigint

declare @rw_errors_init bigint,@rw_errors_upd bigint

declare @cachehit real,@cachebase real,@userconnections bigint

declare @count tinyint,@counter varchar(100),@value bigint,@cmd nvarchar(250)

--

-- user database stats

--

declare @UserDB1Trans bigint

declare @UserDB2Trans bigint

declare @UserDB3Trans bigint

declare @UserDB4Trans bigint

--

declare @TempdbTrans bigint

declare @TotalTrans bigint

--

declare @LockRequests bigint,@LockTimeouts bigint,@Deadlocks bigint,@LockWaits bigint,@FullScans bigint,@Workfiles bigint

declare @Worktables bigint,@IndexSearches bigint,@PageSplits bigint,@Compilations bigint,@LatchWaits bigint

--

-- user connections - point in time value

select @userconnections=cntr_value  from master.dbo.sysperfinfo with (nolock) where counter_name='user connections'

-- cache hit ratio - point in time value

select @cachehit=cntr_value from master.dbo.sysperfinfo with (nolock) where counter_name='cache hit ratio' and instance_name='_Total'

select @cachebase=cntr_value from master.dbo.sysperfinfo with (nolock) where counter_name='cache hit ratio base' and instance_name='_Total'

--

-- read the spt_monitor table in master then re-seed it

--

select

@cpu_busy_init = cpu_busy

,@io_busy_init =  io_busy

,@packet_received_init = pack_received

,@packet_sent_init = pack_sent

,@connections_init = connections

,@packet_error_init = pack_errors

,@total_reads_init = total_read

,@total_writes_init = total_write

,@rw_errors_init = total_errors

from master.dbo.spt_monitor

--

exec master.dbo.sp_monitor

--

select

@cpu_busy_upd = (cpu_busy - @cpu_busy_init)

,@io_busy_upd = (io_busy - @io_busy_init)

,@packet_received_upd = (pack_received - @packet_received_init)/3600

,@packet_sent_upd = (pack_sent - @packet_sent_init)/3600

,@connections_upd = (connections - @connections_init)/3600

,@packet_error_upd = (pack_errors - @packet_error_init)                 -- this is a count not an average

,@total_reads_upd = (total_read - @total_reads_init)/3600

,@total_writes_upd = (total_write - @total_writes_init)/3600

,@rw_errors_upd = (total_errors - @rw_errors_init)                           -- this is a count not an average

from master.dbo.spt_monitor

--

-- clear the temporary storage table

--

truncate table ServerAdmin.dbo.TranTable

--

-- now populate the temporary storage table

--

-- NOTE

-- We have to store the values at hour intervals and then do the maths at each successive interval

-- this means the following are an average for the hour period

--

-- transactions per second for user databases, tempdb and total for server

--

insert into ServerAdmin.dbo.TranTable

select instance_name,cntr_value from master.dbo.sysperfinfo where counter_name ='transactions/sec'

and instance_name in('_Total',

'UserDB1',

'UserDB2',

'UserDB3',

'UserDB4',

'tempdb')

--

-- locks -- requests  timeouts  deadlocks lock waits

-- here we will store the number of requests, waits and timeouts for the period, then average them per second for the period

-- deadlocks we will store the actual number within the period

--

insert into ServerAdmin.dbo.TranTable

select counter_name,cntr_value from master.dbo.sysperfinfo where counter_name in('Lock Requests/sec','Lock Timeouts/sec','Number of Deadlocks/sec','Lock Waits/sec')

and instance_name='_Total'

--

-- scans --  table scans  index searches  page splits  workfiles  worktables and compilations

-- averaged for the period per sec

--

insert into ServerAdmin.dbo.TranTable

select counter_name,cntr_value from master.dbo.sysperfinfo  where counter_name in ('Full Scans/sec','Index Searches/sec','Page Splits/sec','Workfiles Created/sec',

'Worktables Created/sec','SQL Compilations/sec','latch waits/sec')

--   

--                                                                                               

Select

--

@UserDB1Trans = UserDB1,

@UserDB2Trans = UserDB2,

@UserDB3Trans = UserDB3,

@UserDB4Trans = UserDB4,

--

@tempdbTrans = [tempdb],

@TotalTrans = [_Total],

--

@LockRequests = [Lock Requests/sec],

@LockTimeouts = [Lock Timeouts/sec],

@Deadlocks = [Number of Deadlocks/sec],

@LockWaits = [Lock Waits/sec],

@LatchWaits = [Latch waits/sec],

@FullScans = [Full Scans/sec],

@Workfiles = [Workfiles Created/sec],

@Worktables = [Worktables Created/sec],

@IndexSearches = [Index Searches/sec],

@PageSplits = [Page Splits/sec],

@Compilations = [SQL Compilations/sec]

from ServerAdmin.dbo.PerfData_Wide

where numkey = 1

--

-- Now "cross tab" to a wide table so I can read the values hourly

--

set @count=1

while @count<=(select max(NumKey) from ServerAdmin.dbo.TranTable)

BEGIN

            select @counter=quotename(rtrim(counter)),@value=isnull(value,0) from ServerAdmin.dbo.TranTable where NumKey=@count

            set @cmd= 'Update ServerAdmin.dbo.PerfData_wide  set '+@counter+' = '+convert(varchar,@value)+' where numkey=1'

            exec (@cmd)

            --print @cmd

            set @count=@count+1

END

--endwhile

--

--  Now write this data to the daily stats table

--

insert into ServerAdmin.dbo.Daily_Statistics

(

CacheHitRatio,

cpu_busy,io_busy,

Logins,

UserDB1Trans,

UserDB2Trans,

UserDB3Trans,

UserDB4Trans,

TempdbTransactions,

TotalTransactions,

LockRequests,LockTimeouts,Deadlocks,LockWaits,latchWaits,

FullScans,WorkFiles,WorkTables,IndexSearches,PageSplits,Compilations,Connections,PacketsReceived,PacketsSent,PacketErrors,Reads,Writes,RWErrors)

 

Select

(@cachehit/@cachebase *100.00),

abs(@cpu_busy_upd *32),      -- see KB813199

abs(@io_busy_upd*32),                      -- see KB813199

@userconnections,

abs(([UserDB1]-@UserDB1Trans)/3600),

abs(([UserDB2]-@UserDB2Trans)/3600),

abs(([UserDB3]-@UserDB3Trans)/3600),

abs(([UserDB4]-@UserDB4Trans)/3600),

abs(([tempdb]-@tempdbTrans)/3600),

abs(([_Total]-@TotalTrans)/3600),

abs(([Lock Requests/sec]-@LockRequests)/3600),

abs(([Lock Timeouts/sec]-@LockTimeouts)/3600),

abs([Number of Deadlocks/sec]-@Deadlocks),

abs(([Lock Waits/sec]-@LockWaits)/3600),

abs(([Latch waits/sec]-@LatchWaits)/3600),

abs(([Full Scans/sec]-@FullScans)/3600),

abs(([Workfiles Created/sec]-@Workfiles)/3600),

abs(([Worktables Created/sec]-@Worktables)/3600),

abs(([Index Searches/sec]-@IndexSearches)/3600),

abs(([Page Splits/sec]-@PageSplits)/3600),

abs(([SQL Compilations/sec]-@Compilations)/3600),

abs(@connections_upd),

abs(@packet_received_upd),

abs(@packet_sent_upd),

abs(@packet_error_upd),

abs(@total_reads_upd),

abs(@total_writes_upd),

abs(@rw_errors_upd)

from ServerAdmin.dbo.PerfData_Wide

where numkey = 1

GO

 

 Procedure  RollupDailyData                           ( creates daily average from collected stats ) 

 

CREATE proc dbo.RollupDailyData

-- ============================================================= 

--  Procedure:              RollupDailyData                         

--  Written by:           (c)Colin Leversuch-Roberts

--                                  www.kelemconsulting.co.uk                               

--                                                               

--  Purpose:                 Part of set of procs and tables to monitor trends in sql server usage

--                                                                

--  System:                   DBA Monitoring -

--

--  Input Parameters:     none

--                                 

--  Returns :                 none                                     

--                                                               

--  Usage:                   Called by scheduled task once a day after midnight

--                                  Alter the date functions if not running after midnight

--                                  averages 24 values from 23:00 to 23:00

--                                  making it run exactly midnight to midnight would be more complex and is only relevant if hour 23 is critical

--                                                               

--  Notes:                     This proc is part of the set:-    InitialiseMonitorStats

--                                                                                  InitialiseStatistics

--                                                                                  GenerateStatistics_Hourly

--                                                                                  RollupDailyData

--                                                                                  RollupDailyData8to6

--

--                                  This procedure rolls up the entire daily stats into a set of averages for the day and

--                                  places the results in ServerAdmin..Growth_Statistics

--                                  This averages the stats over the day

--

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              12th jan 2006  Initial Release                             

-- ============================================================= 

as

set nocount on

--

--move data to daily summary

--

insert into ServerAdmin.dbo.Growth_Statistics

(CacheHitRatio,

cpu_busy,

io_busy,

Logins,

Connections,

PacketsReceived,

PacketsSent,

PacketErrors,

Reads,

Writes,

RWErrors,

--

TotalTransactions,

--

-- change user database entries here

--

UserDB1Trans,

UserDB2Trans,

UserDB3Trans,

UserDB4Trans,

--

TempdbTransactions,

--

FullScans,

IndexSearches,

PageSplits,

LockRequests,

LockTimeouts,

Deadlocks,

latchWaits,

LockWaits,

Compilations,

WorkFiles,

WorkTables

)

select

avg(CacheHitRatio),

avg(cpu_busy),

avg(io_busy),

avg(Logins),

avg(Connections),

avg(PacketsReceived),

avg(PacketsSent),

sum(PacketErrors),

avg(Reads),

avg(Writes),

sum(RWErrors),

--

avg(TotalTransactions),

--

-- change user database entries here

--

avg(UserDB1Trans),

avg(UserDB2Trans),

avg(UserDB3Trans),

avg(UserDB4Trans),

--

avg(TempdbTransactions),

--

avg(FullScans),

avg(IndexSearches),

avg(PageSplits),

avg(LockRequests),

avg(LockTimeouts),

sum(Deadlocks),

avg(LatchWaits),

avg(LockWaits),

avg(Compilations),

avg(WorkFiles),

avg(WorkTables)

from ServerAdmin.dbo.Daily_Statistics

where datepart(dd,getdate()-1)=datepart(dd,TheDate) and datepart(mm,getdate()-1)=datepart(mm,TheDate) and datepart(yy,getdate()-1)=datepart(yy,TheDate)

GO

 

 Procedure  RollupDailyData8to6      ( creates daily average from collected stats for hours 08:00 – 18:00)

•        Modify or create extra procedures to average data for specific time periods.

•        Create a new summary table and proc for each time period

 

CREATE proc dbo.RollupDailyData8to6

-- ============================================================= 

--  Procedure:              RollupDailyData8to6                          

--  Written by:           (c)Colin Leversuch-Roberts

--                                  www.kelemconsulting.co.uk                               

--                                                               

--  Purpose:                 Part of set of procs and tables to monitor trends in sql server usage

--                                                               

--  System:                   DBA Monitoring -

--

--  Input Parameters:     none

--                                 

--  Returns :                 none                                     

--                                                                

--  Usage:              Called by scheduled task once a day after midnight

--                                  Alter the date functions if not running after midnight

--                                                               

--  Notes:                     This proc is part of the set:-    InitialiseMonitorStats

--                                                                                  InitialiseStatistics

--                                                                                  GenerateStatistics_Hourly

--                                                                                  RollupDailyData

--                                                                                  RollupDailyData8to6

--

--                                  This procedure rolls up the entire daily stats into a set of averages for core hours and

--                                  places the results in ServerAdmin..Growth_Statistics8to6

--                                  This averages the stats from 08:00 - 18:00

--

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              12th jan 2006  Initial Release                             

-- ============================================================= 

as

set nocount on

--

--move data to daily summary hours 8 - 6

--

insert into ServerAdmin.dbo.Growth_Statistics8to6

(CacheHitRatio,

cpu_busy,

io_busy,

Logins,

Connections,

PacketsReceived,

PacketsSent,

PacketErrors,

Reads,

Writes,

RWErrors,

--

TotalTransactions,

--

-- change user database entries here

--

UserDB1Trans,

UserDB2Trans,

UserDB3Trans,

UserDB4Trans,

--

TempdbTransactions,

--

FullScans,

IndexSearches,

PageSplits,

LockRequests,

LockTimeouts,

Deadlocks,

latchWaits,

LockWaits,

Compilations,

WorkFiles,

WorkTables

)

select

avg(CacheHitRatio),

avg(cpu_busy),

avg(io_busy),

avg(Logins),

avg(Connections),

avg(PacketsReceived),

avg(PacketsSent),

sum(PacketErrors),

avg(Reads),

avg(Writes),

sum(RWErrors),

--

avg(TotalTransactions),

--

-- change user database entries here

--

avg(UserDB1Trans),

avg(UserDB2Trans),

avg(UserDB3Trans),

avg(UserDB4Trans),

--

avg(TempdbTransactions),

--

avg(FullScans),

avg(IndexSearches),

avg(PageSplits),

avg(LockRequests),

avg(LockTimeouts),

sum(Deadlocks),

avg(LatchWaits),

avg(LockWaits),

avg(Compilations),

avg(WorkFiles),

avg(WorkTables)

from ServerAdmin.dbo.Daily_Statistics

where datepart(dd,getdate()-1)=datepart(dd,TheDate)

and datepart(mm,getdate()-1)=datepart(mm,TheDate)

and datepart(yy,getdate()-1)=datepart(yy,TheDate)

and datepart(hh,TheDate) between 9 and 18

-- samples taken at hours 9 to 18 cover 08:00 to 18:00, as each hourly sample is an average for the preceding hour

GO

 

8. Finally … the job scripts

 

To initialise the stats ( for a 24 x 7 system this has to be run once only )

 

exec ServerAdmin.dbo.InitialiseMonitorStats

exec ServerAdmin.dbo.InitialiseStatistics

 

To gather the stats  ( run a few seconds past each hour )

 

exec ServerAdmin.dbo.GenerateStatistics_Hourly
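
As an aside, if you prefer to create the jobs with T-SQL rather than through Enterprise Manager, a minimal sketch of the hourly collection job might look something like this ( the job name, schedule name and start time below are just examples, not my actual job script ):-

use msdb
go
exec dbo.sp_add_job @job_name = 'DBA - GenerateStatistics_Hourly'
exec dbo.sp_add_jobstep @job_name = 'DBA - GenerateStatistics_Hourly',
    @step_name = 'collect stats',
    @subsystem = 'TSQL',
    @database_name = 'ServerAdmin',
    @command = 'exec ServerAdmin.dbo.GenerateStatistics_Hourly'
-- daily schedule, repeating every 1 hour, starting a few seconds past the hour
exec dbo.sp_add_jobschedule @job_name = 'DBA - GenerateStatistics_Hourly',
    @name = 'hourly',
    @freq_type = 4, @freq_interval = 1,
    @freq_subday_type = 8, @freq_subday_interval = 1,
    @active_start_time = 000010
exec dbo.sp_add_jobserver @job_name = 'DBA - GenerateStatistics_Hourly'
go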

 

To rollup the data ( run just after midnight )

 

exec ServerAdmin.dbo.RollupDailyData

exec ServerAdmin.dbo.RollupDailyData8to6

 

To produce the result set in the report ( exists in a stored proc )

 

select datepart(hh,TheDate),

convert(int,CacheHitRatio),

cpu_busy,

io_busy,

Logins,

Connections,

PacketsReceived,

PacketsSent,

PacketErrors,

Reads,

Writes,

RWErrors,

convert(money,TotalTransactions),

convert(money,UserDB1Trans),

convert(money, UserDB2Trans),

convert(money, UserDB3Trans),

convert(money, UserDB4Trans),

convert(money,TempdbTransactions),

FullScans,

IndexSearches,

convert(money,PageSplits),

LockRequests,

convert(money,LockTimeouts),

Deadlocks,

convert(money,LatchWaits),

convert(money,LockWaits),

Compilations,

WorkFiles,

WorkTables

from dbo.Daily_Statistics

where datepart(dd,getdate()-1)=datepart(dd,TheDate) and datepart(mm,getdate()-1)=datepart(mm,TheDate) and datepart(yy,getdate()-1)=datepart(yy,TheDate)

--

-- last week's summary

--

select datename(dw,TheDate),

convert(int,CacheHitRatio),

cpu_busy,

io_busy,

Logins,

Connections,

PacketsReceived,

PacketsSent,

PacketErrors,

Reads,

Writes,

RWErrors,

convert(money,TotalTransactions),

convert(money,UserDB1Trans),

convert(money, UserDB2Trans),

convert(money, UserDB3Trans),

convert(money, UserDB4Trans),

convert(money,TempdbTransactions),

FullScans,

IndexSearches,

convert(money,PageSplits),

LockRequests,

convert(money,LockTimeouts),

Deadlocks,

convert(money,LatchWaits),

convert(money,LockWaits),

Compilations,

WorkFiles,

WorkTables

from dbo.Growth_Statistics8to6

where TheDate >= getdate()-8

--

 

 

 

 

Posted by GrumpyOldDBA with 1 comment(s)

Creating Baselines for P & O part 2

The most important part of performance tuning is being able to establish a baseline. Without a baseline and on-going statistics and trending information you just cannot make informed judgements about performance degradation or improvement, or about future growth.

I have three standard sets of data I collect as a minimum, here is the second.

This concerns database growth, the cause of many a heated word, I suspect, when databases are volatile! It's important to be able to see how databases are growing and to predict growth for future capacity; this seems even more critical with the wider deployment of SANs, where the aim is usually to achieve 85% or greater capacity utilisation. ( The whole question of SAN storage is another matter however. )

I have a routine which populates a table once a day; the table calculates the %age growth and I generate a report for easy viewing. The table is available for historical reporting. Like the filestats collection, I make use of functions to do the maths for me; these are embedded in the table as calculated columns. There is also a stored procedure which must be placed within master as a system object.

Step 1 .. create the functions and table in your DBA database, called ServerAdmin here.

Step 2 .. create the procedure sp_DatabaseSpace in the master database and mark it as a system object.

Step 3 .. create a scheduled job to run the script shown below.

I have a separate routine which generates a report which I’m not covering here.

CREATE function dbo.fn_WeekEight (@p1 smalldatetime,@p2 numeric(10,2),@p3 sysname)

-- ============================================================= 

--  Function:                 fn_WeekEight                         

--  Written by:             Colin Leversuch-Roberts

--                                  www.kelemconsulting.co.uk

--                                  (c) 6th Jan 2004                               

--                                                               

--  Purpose:            Populates a computed column in table DatabaseGrowth                                 

--                                                               

--  System:                   DBA maintenance

--

--  Input Parameters:     @p1 smalldatetime, @p2 numeric, @p3 sysname

--                                 

--  Returns :                 numeric                                      

--                                                               

--  Usage:              Computed column in table { select dbo.fn_WeekEight(@p1,@p2,@p3) }

--                                                               

--  Notes:                     Will automatically generate a %age change in database size based upon an 8 week period

--                                  Using functions in computed table columns allows results by a simple select

--                                 

--                                                               

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              6 jan - 2004     Initial Release                             

--  2                   14 jan - 2004   divide by zero error when database size unchanged clr       

-- ============================================================= 

returns numeric(10,2)

as

begin

            declare @Return numeric(10,2),@DBSize numeric(10,2)

            select @DBSize=sizemb          

            from dbo.DatabaseGrowth with (nolock)

            where convert(varchar,TheDate,112)=convert(varchar,dateadd(week,-8,@p1),112)

            and DatabaseName=@p3

-- have to cope with divide by zero problem -- rats! makes code less neat!!

            IF @p2=@DBSize

                        set @return=0

            ELSE

                        set @Return=100.00/(@DBSize/(@p2-@DBSize))

            --endif

return(@Return)

end

GO
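
As a quick sanity check of the maths ( the same expression is used in all four WeekN functions, and the figures below are invented ): 100.00/(old/(new-old)) is just 100*(new-old)/old, i.e. the percentage change over the period.

-- e.g. database was 20480 MB eight weeks ago and is 22528 MB today
select 100.00/(20480.00/(22528.00 - 20480.00)) as PctChangeEightWeeks   -- returns 10.00, i.e. 10% growth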

 

CREATE function dbo.fn_WeekFour (@p1 smalldatetime,@p2 numeric(10,2),@p3 sysname)

-- ============================================================= 

--  Function:                 fn_WeekFour                         

--  Written by: Colin Leversuch-Roberts

--                                  www.kelemconsulting.co.uk

--                                  (c) 6th Jan 2004                              

--                                                               

--  Purpose:            Populates a computed column in table DatabaseGrowth                                

--                                                               

--  System:                   DBA maintenance

--

--  Input Parameters:     @p1 smalldatetime, @p2 numeric, @p3 sysname

--                                 

--  Returns :                 numeric                                     

--                                                               

--  Usage:              Computed column in table { select dbo.fn_WeekFour(@p1,@p2,@p3) }

--                                                               

--  Notes:                     Will automatically generate a %age change in database size based upon a 4 week period

--                                  Using functions in computed table columns allows results by a simple select

--                                 

--                                                               

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              6 jan - 2004     Initial Release                             

--  2                   14 jan - 2004   divide by zero error when database size unchanged clr

-- ============================================================= 

returns numeric(10,2)

as

begin

            declare @Return numeric(10,2),@DBSize numeric(10,2)

            select @DBSize=sizemb          

            from dbo.DatabaseGrowth with (nolock)

            where convert(varchar,TheDate,112)=convert(varchar,dateadd(week,-4,@p1),112)

            and DatabaseName=@p3

-- have to cope with divide by zero problem -- rats! makes code less neat!!

            IF @p2=@DBSize

                        set @return=0

            ELSE

                        set @Return=100.00/(@DBSize/(@p2-@DBSize))

            --endif

return(@Return)

end

 

GO

 

CREATE function dbo.fn_WeekOne (@p1 smalldatetime,@p2 numeric(10,2),@p3 sysname)

-- ============================================================= 

--  Function:                 fn_WeekOne                          

--  Written by: Colin Leversuch-Roberts 

--                                  www.kelemconsulting.co.uk

--                                  (c) 6th Jan 2004                               

--                                                               

--  Purpose:            Populates a computed column in table DatabaseGrowth                                

--                                                               

--  System:                   DBA maintenance

--

--  Input Parameters:     @p1 smalldatetime, @p2 numeric, @p3 sysname

--                                 

--  Returns :                 numeric                                     

--                                                               

--  Usage:              Computed column in table { select dbo.fn_WeekOne(@p1,@p2,@p3) }

--                                                                

--  Notes:                     Will automatically generate a %age change in database size based upon a 1 week period

--                                  Using functions in computed table columns allows results by a simple select

--                                 

--                                                                

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              6 jan - 2004     Initial Release                             

--  2                   14 jan - 2004   divide by zero error when database size unchanged clr

-- ============================================================= 

returns numeric(10,2)

as

begin

            declare @Return numeric(10,2),@DBSize numeric(10,2)

            select @DBSize=sizemb          

            from dbo.DatabaseGrowth with (nolock)

            where convert(varchar,TheDate,112)=convert(varchar,dateadd(week,-1,@p1),112)

            and DatabaseName=@p3

-- have to cope with divide by zero problem -- rats! makes code less neat!!

            IF @p2=@DBSize

                        set @return=0

            ELSE

                        set @Return=100.00/(@DBSize/(@p2-@DBSize))

            --endif

return(@Return)

end

 

GO

 

CREATE function dbo.fn_WeekTwo (@p1 smalldatetime,@p2 numeric(10,2),@p3 sysname)

-- ============================================================= 

--  Function:                 fn_WeekTwo                         

--  Written by: Colin Leversuch-Roberts

--                                  www.kelemconsulting.co.uk

--                                  (c) 6th Jan 2004                                 

--                                                               

--  Purpose:            Populates a computed column in table DatabaseGrowth                                

--                                                                

--  System:                   DBA maintenance

--

--  Input Parameters:     @p1 smalldatetime, @p2 numeric, @p3 sysname

--                                 

--  Returns :                 numeric                                     

--                                                                

--  Usage:              Computed column in table { select dbo.fn_WeekTwo(@p1,@p2,@p3) }

--                                                               

--  Notes:                     Will automatically generate a %age change in database size based upon a 2 week period

--                                  Using functions in computed table columns allows results by a simple select

--                                 

--                                                               

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              6 jan - 2004     Initial Release                              

--  2                   14 jan - 2004   divide by zero error when database size unchanged clr

-- ============================================================= 

returns numeric(10,2)

as

begin

            declare @Return numeric(10,2),@DBSize numeric(10,2)

            select @DBSize=sizemb          

            from dbo.DatabaseGrowth with (nolock)

            where convert(varchar,TheDate,112)=convert(varchar,dateadd(week,-2,@p1),112)

            and DatabaseName=@p3

-- have to cope with divide by zero problem -- rats! makes code less neat!!

            IF @p2=@DBSize

                        set @return=0

            ELSE

                        set @Return=100.00/(@DBSize/(@p2-@DBSize))

            --endif

return(@Return)

end

 

GO

use ServerAdmin

go

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[DatabaseGrowth]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)

drop table [dbo].[DatabaseGrowth]

GO

 

CREATE TABLE [dbo].[DatabaseGrowth] (

            [DatabaseName] [sysname] NOT NULL ,

            [SizeMB] [numeric](10, 2) NOT NULL ,

            [TheDate] [smalldatetime] NOT NULL default getdate(),

            [WeekOne] AS ([dbo].[fn_WeekOne]([TheDate], [SizeMB], [DatabaseName])) ,

            [WeekTwo] AS ([dbo].[fn_WeekTwo]([TheDate], [SizeMB], [DatabaseName])) ,

            [WeekFour] AS ([dbo].[fn_WeekFour]([TheDate], [SizeMB], [DatabaseName])) ,

            [WeekEight] AS ([dbo].[fn_WeekEight]([TheDate], [SizeMB], [DatabaseName]))

) ON [PRIMARY]

GO

use master

go

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[sp_DatabaseSpace]') and OBJECTPROPERTY(id, N'IsProcedure') = 1)

drop procedure [dbo].[sp_DatabaseSpace]

GO

CREATE procedure dbo.sp_DatabaseSpace

-- ============================================================================ 

--  Stored Procedure:   sp_DatabaseSpace                         

--  Written by: Colin Leversuch-Roberts 

--                      www.kelemconsulting.co.uk

--                                  (c) 18th November 2003                              

--                                                               

--  Purpose:            List the space used and free for a database

--                                  ( similar to sp_spaceused but better!! )

--                                                                

--  System:                   has to live in master database

--

--  Input Parameters    none

--

--  Output Parameters:            None                                     

--  Return Status:         None                                     

--                                                                

--  Usage:                    Exec dbo.sp_DatabaseSpace

--                                                                        

--  Calls:                       n/a

--

--  Notes:                     This procedure has to live in the master database

--                                  It should then be called from the database to be reported upon

--                                  However, after compilation this procedure must be made a system object

--                                  to allow it to act just like a system procedure

--                                  use sp_MS_marksystemobject 'sp_DatabaseSpace'

--                                  This procedure is intended to be run within a scripted web assistant task

--                                  using the following script:-

--                                  create table #table1(c1 char(35),c2 char(15),c3 char(15),c4 char(15),c5 char(15),c6 char(15))

--                                  insert into #table1  exec dbo.sp_msforeachdb @command1='use "?";exec sp_DatabaseSpace '

--                                  select * from #table1

--                                                               

--  VERSION HISTORY

--  Version No               Date                             Description

--  1                              18th Nov 2003             Initial Release

--=======================================================================================

 

as

set nocount on

declare @dbsize dec(15,0)

declare @logsize dec(15)

declare @bytesperpage           dec(15,0)

declare @pagesperMB dec(15,0)

declare @size int

declare @used int

declare @logfree varchar(12)

-- log details held in sysperfinfo

select @size=cntr_value from master.dbo.sysperfinfo with (nolock) where instance_name=db_name() and counter_name='Log File(s) Size (KB)'

select @used=cntr_value from master.dbo.sysperfinfo with (nolock) where instance_name=db_name() and counter_name='Log File(s) Used Size (KB)'

select @logfree=convert(varchar,(@size-@used)/1024)+' MB'

--

select @dbsize = sum(convert(dec(15),size)) from dbo.sysfiles with (nolock) where (status & 64 = 0)

select @logsize = sum(convert(dec(15),size)) from dbo.sysfiles with (nolock) where (status & 64 <> 0)

select @bytesperpage = low from master.dbo.spt_values with (nolock) where number = 1 and type = 'E'

select @pagesperMB = 1048576 / @bytesperpage

-- output the results

select  db_name() as 'Database Name',

            ltrim(str((@dbsize + @logsize) / @pagesperMB,15,2) + ' MB') as 'Database Size',

            ltrim(str(@dbsize/ @pagesperMB,15,2)+' MB') as 'Data Size',

            ltrim(str(@logsize/ @pagesperMB,15,2)+' MB') as 'Log Size',

            @logfree as 'Free Log',

            'Free Data'=ltrim(str((@dbsize -(select sum(convert(dec(15),reserved))

            from sysindexes with (nolock)

            where indid in (0, 1, 255))) / @pagesperMB,15,2)+ ' MB')

GO

exec dbo.sp_ms_marksystemobject sp_DatabaseSpace

go

script to populate the table:-

create table #table1(c1 char(35),c2 char(15),c3 char(15),c4 char(15),c5 char(15),c6 char(15))       

insert into #table1      

exec dbo.sp_msforeachdb @command1='use "?";exec dbo.sp_DatabaseSpace '

insert into ServerAdmin.dbo.DatabaseGrowth(DatabaseName,SizeMB)   

select c1,convert(numeric(10,2),replace(c3,'MB',''))-convert(numeric(10,2),replace(c6,'MB','')) from #table1 order by c1
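
I'm not covering my report routine here, but a minimal example of the sort of select it is based on ( today's rows with the computed growth columns ) would be:-

select DatabaseName, SizeMB, WeekOne, WeekTwo, WeekFour, WeekEight
from ServerAdmin.dbo.DatabaseGrowth with (nolock)
where convert(varchar,TheDate,112) = convert(varchar,getdate(),112)
order by DatabaseName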

Posted by GrumpyOldDBA with no comments

P & O scripts

Here's a couple of simple scripts that are part of my tool set.
When I'm looking at production problems I have a couple of scripts I always use; I don't have custom procedures based upon sp_who2 as they lack the flexibility required when troubleshooting a live system.
I should say that at most client sites I've also had the benefit of SQL Diagnostic Manager running in the background, I've used this tool since around 1999 and very good it is too!
So I have a script to examine processes; the output has been censored to protect the innocent but is real production output. In use I vary the where clause, which is where a script beats a proc: I might target a particular database or a troublesome COM server connection. There's plenty written about the output from sysprocesses so I don't think I need to waffle on about it here. The script is written to clearly show open transactions, waits and blocked/blocking processes.

select right(convert(varchar,last_batch),7) as "time",left(nt_username,20) as NTuser,spid,open_tran as "tran",kpid,blocked,waittype,waittime
,lastwaittype,waitresource,db_name(dbid) as DB,cpu,physical_io,memusage,status
,hostname,program_name,cmd,loginame from dbo.sysprocesses with (nolock) where dbid>4
order by blocked desc,waittime desc,[tran] desc

 

  My other script is one to examine the contents of the procedure cache. I'm particularly interested in the setopts value, which may indicate performance problems with procedure plan reuse.
Using this script I have been able to find a single web server in a web farm which had been set to the incorrect regional settings. I've also been able to find inconsistent connection strings and incorrect SET options for object compilation.
OK, so the script doesn't actually tell you all this, but you can adapt the where clause to track down the culprits.
If you're now totally lost and haven't a clue what I'm talking about, let me explain a little more...
The setopts value in the query can be decoded to extract the SET options in place for the session; different SET options for the same procedure will create multiple plans, two for each combination of SET options ( a serial and a parallel version ) .. there's plenty of excellent material about how to interpret information from the procedure cache, I'm only interested in the SET options here.
So in an ideal world you would hope to see the same value for all your queries/plans .. if you don't, then you need to sort out the SET options in place.
There's actually a lot of information you can extract, but I figure most other authors have covered that; I feel that less has been said about inconsistent session settings.
Again the output has been censored .. I couldn't find a really interesting display .. sorry.
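I can't show the original query, but a minimal sketch of the kind of thing I mean, grouping SQL 2000's syscacheobjects by setopts, would be something like this ( adjust the where clause to suit ):

select setopts, count(*) as plans, sum(usecounts) as total_uses
from master.dbo.syscacheobjects with (nolock)
where cacheobjtype = 'Compiled Plan'
group by setopts
order by count(*) desc
-- more than one setopts value for what should be the same workload suggests sessions compiling with different SET options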

 

Posted by GrumpyOldDBA with 3 comment(s)
Filed under:

Creating Baselines for P & O

The most important part of performance tuning is being able to establish a baseline. Without a baseline and on-going statistics and trending information you just cannot make informed judgements about performance degradation or improvement, or about future growth.

I have three standard sets of data I collect as a minimum, here is the first.

There's plenty of articles around concerning the use of fn_virtualfilestats so I'm going to concentrate on how I use this function and the data it generates.

Here's a typical call and result set:-

select * from ::fn_virtualfilestats (11,-1)

DbId   FileId TimeStamp   NumberReads          NumberWrites         BytesRead            BytesWritten         IoStallMS            
------ ------ ----------- -------------------- -------------------- -------------------- -------------------- -------------------- 
11     1      179531484   58                   798                  2375680              7946240              389
11     2      179531484   8                    10160                337408               11038720             0

I collect data every hour using a scheduled job and store it in a table
Here's a typical subset of data

time              dbid fileid reads                writes               iostall              
----------------- ---- ------ -------------------- -------------------- -------------------- 
Nov  2 2006  3:00 11   1      407245               71259                1166859
Nov  2 2006  3:00 11   2      10874                327285               34415
Nov  2 2006  4:00 11   1      407253               71259                1166984
Nov  2 2006  4:00 11   2      10874                327285               34415
Nov  2 2006  5:00 11   1      407263               71259                1167046
Nov  2 2006  5:00 11   2      10874                327285               34415
Nov  2 2006  6:00 11   1      407263               71259                1167046
Nov  2 2006  6:00 11   2      10874                327285               34415
Nov  2 2006  7:00 11   1      407357               71259                1167358
Nov  2 2006  7:00 11   2      10874                327285               34415
Nov  2 2006  8:00 11   1      407477               71259                1167700
Nov  2 2006  8:00 11   2      10874                327285               34415
Nov  2 2006  9:00 11   1      407591               71259                1167965
Nov  2 2006  9:00 11   2      10874                327285               34415
Nov  2 2006 10:00 11   1      407614               71259                1168090

Trouble is that the numbers are cumulative, so I have to do subtraction to obtain the figures for each hour.
In principle this sounds easy, but in reality, if you're collecting data from, say, 14 databases every hour, the query to extract that difference and the hour in which it occurred is "complex".
So I take the easy way out and use calculated columns with functions doing the maths, which gives me results similar to this:

time              dbid fileid reads                writes               iostall              hourreads            hourwrites           
----------------- ---- ------ -------------------- -------------------- -------------------- -------------------- -------------------- 
Oct 13 2006  2:28 11   1      101620751            27122598             846102670            NULL                 NULL
Oct 13 2006  2:28 11   2      3042305              70473357             432497               NULL                 NULL
Oct 13 2006  4:00 11   1      101699390            27143997             846966601            43345                11543
Oct 13 2006  4:00 11   2      3042559              70551758             432513               183                  52881
Oct 13 2006  5:00 11   1      101727442            27155013             847248234            28052                11016
Oct 13 2006  5:00 11   2      3042618              70588701             432513               59                   36943
Oct 13 2006  6:00 11   1      101760009            27164425             847612578            32567                9412
Oct 13 2006  6:00 11   2      3042668              70618819             432529               50                   30118
Oct 13 2006  7:00 11   1      101779677            27172467             847779295            19668                8042
Oct 13 2006  7:00 11   2      3042717              70660817             432529               49                   41998
Oct 13 2006  9:00 11   1      101824133            27187730             848222714            12645                7634
Oct 13 2006  9:00 11   2      3042801              70725796             432529               41                   31570

I have a job which publishes this data out to a web page every day and rolls up the previous week's data into averages to give an overall view of i/o.
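
The roll-up itself is nothing clever; a minimal sketch of the sort of daily averages I publish, assuming the FileStats table created further down, looks like this:

select db_name(DbID) as DbName, FileID,
       convert(varchar(10), TheDate, 120) as TheDay,
       avg(HourReads) as AvgHourlyReads,
       avg(HourWrites) as AvgHourlyWrites,
       avg(HourIo) as AvgHourlyIoStall
from ServerAdmin.dbo.FileStats with (nolock)
where TheDate >= getdate() - 8
group by db_name(DbID), FileID, convert(varchar(10), TheDate, 120)
order by DbName, FileID, TheDay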

Points:-

  • I sample every hour; generally this is fine, though for heavy diagnostic work I might sample more often, say every 10 minutes.
  • This type of data collection can help to pinpoint databases and times under load; if you have multiple files then you'll be able to see whether the load on the files is balanced.
  • Conversely it can help find the quiet times, to assist in locating maintenance windows in 24 x 7 systems.
  • Once you have a few months of data the analysis can become more interesting.
  • Functions in tables generally don't give good performance; however, they do make things simpler.
  • An index on column TheDate will aid query performance.
  • Ideally the data would be better extracted to a cube.
  • Notwithstanding the performance hit, you can do some really cool things with functions as calculated columns, including joins and/or lookups to other tables.
  • I wouldn't recommend this for high performance OLTP databases, but for reporting and such it is really very handy. I've been using this technique since SQL 2000 was released.

To populate the table I have a job set to run hourly which would have a step such as this:-

--
insert into ServerAdmin.dbo.Filestats(DbID,FileID,StampTime,Reads,Writes,BytesRead,BytesWrite,ioStall)
select * from ::fn_virtualfilestats (7,-1)
--
insert into ServerAdmin.dbo.Filestats(DbID,FileID,StampTime,Reads,Writes,BytesRead,BytesWrite,ioStall)
select * from ::fn_virtualfilestats (10,-1)
--
insert into ServerAdmin.dbo.Filestats(DbID,FileID,StampTime,Reads,Writes,BytesRead,BytesWrite,ioStall)
select * from ::fn_virtualfilestats (11,-1)
--
insert into ServerAdmin.dbo.Filestats(DbID,FileID,StampTime,Reads,Writes,BytesRead,BytesWrite,ioStall)
select * from ::fn_virtualfilestats (2,-1)
--

I keep this simple, with no generic or dynamic coding; note that I always collect stats for tempdb.

Here's part of a typical report along with the code for the table and functions; the functions must be created prior to the table creation.

Table:-

if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[FileStats]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[FileStats]
GO

CREATE TABLE [dbo].[FileStats] (
	[NumKey] [int] IDENTITY (1, 1) NOT NULL ,
	[TheDate] [datetime] NULL ,
	[DbID] [tinyint] NULL ,
	[FileID] [tinyint] NULL ,
	[StampTime] [bigint] NULL ,
	[Reads] [bigint] NULL ,
	[Writes] [bigint] NULL ,
	[BytesRead] [bigint] NULL ,
	[BytesWrite] [bigint] NULL ,
	[ioStall] [bigint] NULL ,
	[DbName] AS (db_name([DbID])) ,
	[HourWrites] AS ([dbo].[fn_FileStats_Write]([TheDate], [dbid], [fileid], [Writes])) ,
	[HourReads] AS ([dbo].[fn_FileStats_Read]([TheDate], [dbid], [fileid], [Reads])) ,
	[HourIo] AS ([dbo].[fn_FileStats_IOStall]([TheDate], [dbid], [fileid], [iostall])) ,
	[AvIoStall] AS ([dbo].[fn_FileStats_AvIoStall2]([TheDate], [dbid], [fileid], [Reads], [Writes], [ioStall])) 
) ON [PRIMARY]
GO

ALTER TABLE [dbo].[FileStats] ADD 
	CONSTRAINT [DF__FileStats__TheDate] DEFAULT (getdate()) FOR [TheDate],
	CONSTRAINT [uk_FileStats_Numkey] UNIQUE  NONCLUSTERED 
	(
		[NumKey]
	)  ON [PRIMARY] 
GO
--
--
--

Functions:-
--
--
CREATE function [dbo].[fn_FileStats_Write] (@p1 smalldatetime,@p2 tinyint,@p3 tinyint,@p4 bigint)
-- =============================================================
--  Function:            fn_FileStats_Write
--  Written by:          Colin Leversuch-Roberts
--                       www.kelemconsulting.co.uk
--                       (c) 6th Jan 2004
--
--  Purpose:             Populates a computed column in table FileStats
--
--  System:              DBA maintenance
--
--  Input Parameters:    @p1 smalldatetime, @p2 tinyint, @p3 tinyint, @p4 bigint
--                       row timestamp, database id, file id, number of writes
--
--  Returns :            bigint
--
--  Usage:               Computed column in table { select dbo.fn_FileStats_Write(@p1,@p2,@p3,@p4) }
--
--  Notes:               This function calculates the number of writes recorded against the file id
--                       for the last hour. The writes, reads and i/o stall figures are cumulative hence
--                       the calculations required to work out the figures
--
--  VERSION HISTORY
--  Version No           Date                Description
--  1                    6 jan - 2004        Initial Release
-- =============================================================
returns bigint
as
begin
            declare @Return bigint,@Writes bigint
            select @Writes=Writes
            from dbo.filestats with (nolock)
            where convert(char(15),theDate,113)+'00'=convert(char(15),dateadd(hh,-1,@p1),113)+'00'
            and dbid=@p2 and fileid=@p3
            IF @p4=@Writes
                        set @return=0
            ELSE
                        set @Return=abs(@p4-@Writes)
            --endif
return(@Return)
end
GO
--
--
CREATE function [dbo].[fn_FileStats_Read] (@p1 smalldatetime,@p2 tinyint,@p3 tinyint,@p4 bigint)
-- =============================================================
--  Function:            fn_FileStats_Read
--  Written by:          Colin Leversuch-Roberts
--                       www.kelemconsulting.co.uk
--                       (c) 6th Jan 2004
--
--  Purpose:             Populates a computed column in table FileStats
--
--  System:              DBA maintenance
--
--  Input Parameters:    @p1 smalldatetime, @p2 tinyint, @p3 tinyint, @p4 bigint
--                       row timestamp, database id, file id, number of reads
--
--  Returns :            bigint
--
--  Usage:               Computed column in table { select dbo.fn_FileStats_Read(@p1,@p2,@p3,@p4) }
--
--  Notes:               This function calculates the number of reads recorded against the file id
--                       for the last hour. The writes, reads and i/o stall figures are cumulative hence
--                       the calculations required to work out the figures
--
--  VERSION HISTORY
--  Version No           Date                Description
--  1                    6 jan - 2004        Initial Release
-- =============================================================
returns bigint
as
begin
            declare @Return bigint,@Reads bigint
            select @Reads=Reads
            from dbo.filestats with (nolock)
            where convert(char(15),theDate,113)+'00'=convert(char(15),dateadd(hh,-1,@p1),113)+'00'
            and dbid=@p2 and fileid=@p3
-- have to cope with divide by zero problem -- rats! makes code less neat!!
            IF @p4=@Reads
                        set @return=0
            ELSE
                        set @Return=abs(@p4-@Reads)
            --endif
return(@Return)
end
GO
--
--
CREATE function [dbo].[fn_FileStats_IOStall] (@p1 smalldatetime,@p2 tinyint,@p3 tinyint,@p4 bigint)
-- =============================================================
--  Function:            fn_FileStats_IOStall
--  Written by:          Colin Leversuch-Roberts
--                       www.kelemconsulting.co.uk
--                       (c) 6th Jan 2004
--
--  Purpose:             Populates a computed column in table FileStats
--
--  System:              DBA maintenance
--
--  Input Parameters:    @p1 smalldatetime, @p2 tinyint, @p3 tinyint, @p4 bigint
--                       row timestamp, database id, file id, io stall value
--
--  Returns :            bigint
--
--  Usage:               Computed column in table { select dbo.fn_FileStats_IOStall(@p1,@p2,@p3,@p4) }
--
--  Notes:               This function calculates the iostall in ms recorded against the file id
--                       for the last hour. The writes, reads and i/o stall figures are cumulative hence
--                       the calculations required to work out the figures
--
--  VERSION HISTORY
--  Version No           Date                Description
--  1                    6 jan - 2004        Initial Release
-- =============================================================
returns bigint
as
begin
            declare @Return bigint,@IOStall bigint
            select @IOStall=IOStall
            from dbo.filestats with (nolock)
            where convert(char(15),theDate,113)+'00'=convert(char(15),dateadd(hh,-1,@p1),113)+'00'
            and dbid=@p2 and fileid=@p3
-- have to cope with divide by zero problem -- rats! makes code less neat!!
            IF @p4=@IOStall
                        set @return=0
            ELSE
                        set @Return=abs(@p4-@IOStall)
            --endif
return(@Return)
end
GO
--
--
CREATE function dbo.fn_FileStats_AvIoStall2 (@p1 datetime,@p2 tinyint, @p3 tinyint,@p4 bigint,@p5 bigint,@p6 bigint)
-- =============================================================
--  Function:            fn_FileStats_AvIoStall2
--  Written by:          (c) Colin Leversuch-Roberts
--                       www.kelemconsulting.co.uk
--
--  Purpose:             to produce an average time in ms for the file io
--
--  System:              DBA maintenance
--
--  Input Parameters:    @p1 datetime, @p2 tinyint, @p3 tinyint, @p4 bigint, @p5 bigint, @p6 bigint
--                       datetime, database id, file id, reads, writes, io stall
--
--  Returns :            decimal with 4 decimal points
--
--  Usage:               select dbo.fn_FileStats_AvIoStall2(@p1,@p2,@p3,@p4,@p5,@p6)
--
--  Notes:               function to calculate the average i/o stall per i/o for the last hour
--                       Can't do a direct function on the calculated columns as you
--                       can't use computed columns in a computed column, so we do it this way.
--
--  VERSION HISTORY
--  Version No           Date                Description
--  1                    25-aug-2005         Initial Release
--
-- =============================================================
returns decimal(9,4)
as
begin
            declare @reads bigint,@writes bigint,@iostall bigint,@return decimal(9,4)
            --
            select @reads = dbo.fn_FileStats_Read(@p1,@p2,@p3,@p4)
            select @writes = dbo.fn_FileStats_Write(@p1,@p2,@p3,@p5)
            select @iostall = dbo.fn_FileStats_IOStall(@p1,@p2,@p3,@p6)
            --
            IF @reads=0 and @writes=0
                        set @return = 0
            ELSE
                        set @return = abs(((@iostall*1.0)/(@reads+@writes)))
            --endif
            --
return(@return)
end
GO
--
--
PS .. still struggling with creating blog entries, please accept my apologies for poor formatting!!

Posted by GrumpyOldDBA with no comments

Best Practice - Defaults on columns

I have an extensive document on Best Practices from the DBA perspective; this is one which I don't think made it in.

I'm working with some versioning and scripting tools and the definition of column defaults in table creation scripts is giving me some pain.

If you use the method below, each time the table is created it generates a new system name for the constraint; the same applies to Foreign Keys and Primary Keys .. So ... DON'T DO IT !!!!  It creates havoc when comparing databases and handling versioning. Naming your constraints, as in the second table script, is a much more DBA friendly method.

-- not good !!

CREATE TABLE [dbo].[test2] (

[numkey] [int] NOT NULL ,

[thedate] [datetime] NULL default getdate()

) ON [PRIMARY]

GO

-- Produces  when scripted

CREATE TABLE [dbo].[test2] (

[numkey] [int] NOT NULL ,

[thedate] [datetime] NULL

) ON [PRIMARY]

GO

ALTER TABLE [dbo].[test2] ADD

CONSTRAINT [DF__test2__thedate__2057CCD0] DEFAULT (getdate()) FOR [thedate]

GO

-- That's the way to do it !!

CREATE TABLE [dbo].[test2] (

[numkey] [int] NOT NULL ,

[thedate] [datetime] NULL constraint DF_test2_thedate default getdate()

) ON [PRIMARY]

GO

-- Produces when scripted

CREATE TABLE [dbo].[test2] (

[numkey] [int] NOT NULL ,

[thedate] [datetime] NULL

) ON [PRIMARY]

GO

ALTER TABLE [dbo].[test2] ADD

CONSTRAINT [DF_test2_thedate] DEFAULT (getdate()) FOR [thedate]

GO
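
If you want to check an existing database for system-generated default names, a rough check along these lines will list them ( the double underscore prefix is only a heuristic, but it's what the auto-naming produces ):

select object_name(parent_obj) as TableName, name as ConstraintName
from dbo.sysobjects with (nolock)
where xtype = 'D'                 -- default constraints
and name like 'DF[_][_]%'         -- system-generated names look like DF__table__column__hexvalue
order by TableName, ConstraintName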

Posted by GrumpyOldDBA with 1 comment(s)
Filed under:

Just a test!

 

 

 

 

 

 

Posted by GrumpyOldDBA with no comments