Reading a large SQL Errorlog

I came across an interesting situation recently where a SQL instance had been configured to audit both successful and failed logins, with the audit entries written to the errorlog.


This meant… every time a user or the application connected to the SQL instance – an entry was written to the errorlog.

This meant…  huge SQL Server errorlogs.

Opening an errorlog in the usual way, using SQL Server Management Studio, was extremely slow…

Luckily, I was able to use xp_readerrorlog to work around this – here are some example queries…

 

To show errorlog entries from the currently active log, just for today :-


DECLARE @now DATETIME
DECLARE @midnight DATETIME

SET @now = GETDATE()
SET @midnight = DATEADD(d, DATEDIFF(d, 0, GETDATE()), 0)  -- midnight today

-- parameters: log file (0 = current), log type (1 = SQL errorlog), search string 1, search string 2, start time, end time
EXEC xp_readerrorlog 0, 1, NULL, NULL, @midnight, @now

 

To find out how big the current errorlog actually is, and what the earliest and most recent entries are in the errorlog :-


CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20),Text VARCHAR(4000))

INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
SELECT COUNT(*) AS 'Number of entries in errorlog', MIN(logdate) AS 'ErrorLog Starts', MAX(logdate) AS 'ErrorLog Ends'
FROM #temp_errorlog

DROP TABLE #temp_errorlog

To show just DBCC history  information in the current errorlog :-

EXEC xp_readerrorlog 0,1,'dbcc'

 

To show backup errorlog entries in the current errorlog :-

CREATE TABLE #temp_errorlog (Logdate DATETIME, ProcessInfo VARCHAR(20),Text VARCHAR(4000))

INSERT INTO #temp_errorlog EXEC xp_readerrorlog 0 -- for current errorlog
SELECT * from #temp_errorlog WHERE ProcessInfo = 'Backup' ORDER BY Logdate

DROP TABLE #temp_errorlog

xp_readerrorlog is an undocumented system stored procedure – so there's no official Microsoft documentation describing the parameters it takes – however, there's a good blog post on this here

And, if you do have a problem with huge errorlogs – please consider running the system stored procedure sp_cycle_errorlog on a nightly or other regular basis. But if you do this, remember to change the number of errorlogs you retain – the default of 6 might not be sufficient for you…
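For example, a minimal sketch of the approach – a nightly SQL Agent job step to cycle the log, plus the registry change behind the SSMS 'Configure SQL Server Error Logs' dialog to retain more log files (the value of 30 below is just an illustration – pick whatever suits your retention needs) :-

-- close the current errorlog and start a new one (schedule nightly via a SQL Agent job)
EXEC sp_cycle_errorlog

-- increase the number of errorlogs retained from the default of 6 to 30
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
     N'Software\Microsoft\MSSQLServer\MSSQLServer',
     N'NumErrorLogs', REG_DWORD, 30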


Reducing Deadlocks - not a DBA issue ?

 

As a DBA, I'm involved on an almost daily basis in troubleshooting 'SQL Server' performance issues. Often, this troubleshooting soon veers away from 'it's a SQL Server issue' to instead become a wider application/database design/coding issue.

One common perception with SQL Server is that deadlocking is an application design issue – and is fixed by recoding… I see this reinforced by MCP-type questions/scenarios where the answer to prevent deadlocking is simply to change the order in which tables are accessed in the code…

Whilst this is correct, I do think this has led to a situation where many 'operational' or 'production support' DBAs, when faced with a deadlock, are happy to throw the issue over to developers without analysing the issue further....

A couple of 'war stories' on deadlocks which I think are interesting :- 

Case One, I had an issue recently with a third-party application that I support on SQL 2008.

This particular third-party application has an unusual support agreement where the customer is allowed to change the index design on the third-party provided database. 

However, we are not allowed to alter application code or modify table structure..

This third-party application is also known to encounter occasional deadlocks – indeed, I have documentation from the vendor that up to 50 deadlocks per day is not unusual !

So, as a DBA I have to support an application which in my opinion has too many deadlocks - but, I cannot influence the design of the tables or stored procedures for the application.

This should be the classic 'blame the third-party developers' scenario – hope the issue gets addressed in a future application release – ie we could wait years for this to be resolved and implemented in our production environment...

But, as DBAs can change the index layout, was there anything I could still do to reduce the deadlocks in the application?

I initially used SQL traceflag 1222 to write deadlock detection output to the SQL Errorlog – using this I was able to identify one table heavily involved in the deadlocks.
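Enabling this is straightforward – a one-liner turns it on globally (it can also be added as a -T1222 startup parameter so it survives a restart) :-

DBCC TRACEON (1222, -1)  -- write deadlock graph information to the SQL errorlog, for all sessions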

When I examined the table definition, I was surprised to see it was a heap – ie no clustered index existed on the table.

Using SQL profiler to see locking behaviour and plan for the query involved in the deadlock, I was able to confirm a table scan was being performed.

By creating an appropriate clustered index - it was possible to produce a more efficient plan and locking behaviour.
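The change was along these lines – a minimal sketch, with a hypothetical table and key column standing in for the real ones, which were chosen from the predicates used by the deadlocking query :-

CREATE CLUSTERED INDEX CIX_OrderDetail_OrderId ON dbo.OrderDetail (OrderId)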

So, fewer locks, held for less time = less chance of deadlocks.

I'm still unhappy about the overall number of deadlocks on this system - but that's something to be discussed further with the vendor.

Case Two, a system which hadn't changed for months suddenly started seeing deadlocks on a regular basis. I love the 'nothing's changed' scenario, as it gives me the opportunity to appear wise and say 'nothing's changed on this system, except the data'…

This particular deadlock occurred on a table which had been growing rapidly. By using DBCC SHOW_STATISTICS, the DBA team were able to see that the deadlocks seemed to be occurring shortly after auto-update stats had regenerated the table statistics using its default sampling behaviour.
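A sketch of the kind of check used (the object and statistic names here are hypothetical) – the STAT_HEADER output shows when the statistics were last updated and how many rows were sampled :-

DBCC SHOW_STATISTICS ('dbo.Orders', 'IX_Orders_CustomerId') WITH STAT_HEADER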

As a quick fix, we were able to schedule a nightly UPDATE STATISTICS WITH FULLSCAN on the table involved in the deadlock - thus, greatly reducing the potential for stats to be updated via auto_update_stats, consequently reducing the potential for a bad plan to be generated based on an unrepresentative sample of the data. This reduced the possibility of a deadlock occurring.  
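The nightly job step was essentially just this (again, a hypothetical table name) :-

UPDATE STATISTICS dbo.Orders WITH FULLSCAN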

Not a perfect solution by any means, but quick, easy to implement, and needed no application code changes. 

This fix gave us some 'breathing space'  to properly fix the code during the next scheduled application release.

 

 The moral of this post - don't dismiss deadlocks as issues that can only be fixed by developers...


Database Backup History From MSDB in a pivot table

I knocked up a nice little query to display backup history for each database in a pivot table format.

I wanted to display the most recent full, differential, and transaction log backup for each database.

Here's the SQL :-

WITH backupCTE AS
(
    SELECT name, recovery_model_desc,
           D AS 'Last Full Backup',
           I AS 'Last Differential Backup',
           L AS 'Last Tlog Backup'
    FROM
    (   SELECT db.name, db.recovery_model_desc, type, backup_finish_date
        FROM master.sys.databases db
        LEFT OUTER JOIN msdb.dbo.backupset a
             ON a.database_name = db.name
        WHERE db.state_desc = 'ONLINE'
    ) AS Sourcetable
    PIVOT
    (   MAX(backup_finish_date) FOR type IN (D, I, L) ) AS MostRecentBackup
)
SELECT * FROM backupCTE

 Gives output such as this :- 

 

With this query, I can then build up some straightforward queries to ensure backups are scheduled and running as expected -

For example, the following WHERE clauses can be used (a complete check combining one of these with the CTE is sketched below) :-

- WHERE [Last Full Backup] IS NULL – ie the database has never been backed up..

- WHERE [Last Tlog Backup] < DATEADD(mi, -60, GETDATE()) AND recovery_model_desc <> 'SIMPLE' – transaction log not backed up in the last 60 minutes.

- WHERE [Last Full Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Differential Backup] < [Last Full Backup] – no full or differential backup in the last day.

- WHERE [Last Differential Backup] < DATEADD(dd, -1, GETDATE()) AND [Last Full Backup] < DATEADD(dd, -8, GETDATE()) – no differential backup in the last day when the last full backup is over 8 days old.
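As an example, here's one of these checks plugged in – a minimal sketch that lists databases which have never had a full backup :-

-- replace the final "SELECT * FROM backupCTE" of the query above with :
SELECT name AS 'Database', recovery_model_desc
FROM backupCTE
WHERE [Last Full Backup] IS NULL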

 

 

 


SSAS - What is running ?

 

I saw a tweet today from respected blogger John Sansom asking how a SQL DBA should attempt to performance-tune SSAS.

 

I've been through a similar scenario - and one of the tools I developed as part of my troubleshooting was my own 'SSAS version' of sp_who.

 

I've written a stored procedure, which I run on a SQL Server instance, that connects to an SSAS instance via a linked server. I then wrote an SSRS report that uses this stored proc to give developers and support staff visibility of SSAS cube activity.

 

It's quite a basic/simple solution, but it does help expose what is running – I must say, on a fairly low-use SSAS server; I don't know how well this scales on a heavily-used SSAS implementation.

 

First of all, to add an SSAS linked server to your SQL Server…

 

EXEC master.dbo.sp_addlinkedserver @server = N'SSAS_LINKED_SERVER', @provider=N'MSOLAP', @datasrc=N'SSAS ServerName'

 

EXEC master.dbo.sp_addlinkedsrvlogin @rmtsrvname=N'SSAS_LINKED_SERVER',@useself=N'False',@locallogin=NULL,@rmtuser=N'Account to run command via' ,@rmtpassword='########'

 

Then I created a stored procedure :-

 

CREATE PROC [dbo].[SSAS_whoIsActive]
AS
BEGIN

    -- joins the SSAS discover_sessions, discover_commands and discover_connections rowsets via the linked server
    SELECT ssasSessions.SESSION_SPID AS [SPID],
           ssasSessions.SESSION_USER_NAME AS [User ID],
           ssasSessions.SESSION_CURRENT_DATABASE AS [Database],
           ssasConnections.connection_host_name AS 'Client IP address',
           ROUND(CAST(ssasCommands.COMMAND_CPU_TIME_MS AS int)/1000.0, 2) AS [Command Time(in sec)],
           ssasCommands.COMMAND_START_TIME AS [Command Start Time],
           ssasCommands.COMMAND_END_TIME AS [Command End Time],
           ssasCommands.COMMAND_TEXT AS [MDX Command],
           connection_host_application AS 'Application',
           CHARINDEX('</Process>', ssasCommands.COMMAND_TEXT) AS ProcessRunning
    FROM OPENQUERY(SSAS_LINKED_SERVER, 'select * from $system.discover_sessions') AS ssasSessions
    LEFT JOIN OPENQUERY(SSAS_LINKED_SERVER, 'select * from $system.discover_commands') AS ssasCommands
           ON ssasSessions.SESSION_SPID = ssasCommands.SESSION_SPID
    LEFT JOIN OPENQUERY(SSAS_LINKED_SERVER, 'select * from $system.discover_connections') AS ssasConnections
           ON ssasSessions.SESSION_connection_id = ssasConnections.connection_id
    ORDER BY [Command Start Time] DESC

END
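To check current activity, the proc can then simply be executed – the SSRS report just uses this as its dataset :-

EXEC dbo.SSAS_whoIsActive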

 

I use the 'ProcessRunning' column to identify if a cube process command is running, and then display this MDX command in a different colour (red) in my report.

 

This can be done by adding an expression to the font colour property of the report's 'MDX Command' text box:  =IIF(Fields!ProcessRunning.Value > 0, "Red", "Black")

The MDX command column can sometimes be very large in this report output - it can be quite a shock for a DBA to see how complicated MDX generated by SSAS sometimes can be ! I usually change the font size on this output column to be 8pt.

Note my sproc is named in honour of the famous sp_whoIsActive written by Adam Machanic

 

 

 

 


Auto Update Stats change in SQL 2008 R2 sp 1

I came across a very interesting post via Brent Ozar’s weekly news email – ( I strongly recommend you go to  Brent's excellent website and subscribe to the weekly update)

The post states that a new trace flag has been made available in SQL 2008 R2 SP1 to change auto-update stats behaviour. Using the new 2371 trace flag, SQL Server will now run auto-update-stats much more aggressively – ie, for larger tables, at a much lower threshold than the previous default of 20% change.

Although this post relates to SAP – I think for many SQL DBAs this could be a very relevant trace flag. I have certainly encountered many situations where this would have been a useful option.
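If you want to experiment with it (on a test system first, of course), the trace flag can be enabled globally, or added as a startup parameter :-

DBCC TRACEON (2371, -1)  -- use the more aggressive, dynamic auto-update-stats threshold for all sessions
-- or add -T2371 to the SQL Server startup parameters so it persists across restarts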

I also hope Microsoft will consider introducing this for SQL 2008…

As always, any change to a production SQL Server needs to be well-tested before deploying – but this does look worthy of further investigation…


Denali / SSAS Free Training in Reading – Expedition Denali workshop

I came across this Microsoft training event being held in Reading at the end of June – sounds excellent… two days of free, intensive training!

Hope to see you there..


Do you send output from sys.dm_exec_query_stats to your developers ?

A few years ago, I had the good fortune to work on an engagement with Kevin McPherrin of Microsoft. Kevin is one of those Microsoft employees you never hear about, but he is an absolute expert on DBA good practice and SQL advice.

One really useful tip he gave me was to regularly run reports on production servers, emailing details of heavily-used and badly-performing stored procedures to the developers… the DMV sys.dm_exec_query_stats can be queried for this summary information.

Now, I send out reports every night, using SSRS, such as this :-

[report screenshot: stored procedures ordered by execution count]

See how often ‘proc A’ is run – and also that the oldest entry in procedure cache is only 6 hours old…

Using this information, developers can focus their attention on optimising the most heavily-used stored procedures in a system.

The query used to produce this report can easily be altered to show what procedures use the most CPU…

[report screenshot: stored procedures ordered by total CPU]

Interestingly, although stored procedure A is run over 25 million times in 5 and a half hours, there are other stored procedures that use a lot more CPU.

For example, Proc H uses more CPU, even though it has only been run 32 times. And note the CPU usage from this proc only relates to the last hour and a half..

I find these reports are excellent for focusing developer attention on production problems.

 

The SQL used to produce a report of procedures using the highest average logical IO per execution :-

 

SELECT  TOP 20 RANK() OVER(ORDER BY  (qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count desc) AS 'Rank',
        qs.execution_count AS 'Executions',
        qs.last_execution_time,
        qs.max_elapsed_time/1000 AS 'Max Duration ms',
         qs.total_elapsed_time/1000 AS 'Total Duration ms',
        (qs.total_elapsed_time/(qs.execution_count*1000)) AS [Avg Duration ms],
        qs.total_worker_time/1000 AS 'Total CPU ms', (qs.total_worker_time/(qs.execution_count*1000)) AS 'Avg CPU ms',
        qs.max_worker_time/1000 AS 'Max CPU ms',
        qs.total_logical_reads + qs.total_physical_reads AS 'Total Reads', qs.total_logical_writes AS 'TotalWrites',
        (qs.total_logical_reads + qs.total_logical_writes) /qs.execution_count AS [Avg IO],
         qs.max_logical_reads AS 'Max logical reads',
         qs.max_logical_writes AS 'Max logical writes',
            SUBSTRING(qt.text, (qs.statement_start_offset/2) + 1,
            ((CASE WHEN qs.statement_end_offset = -1
            THEN DATALENGTH(qt.text)
            ELSE qs.statement_end_offset END - qs.statement_start_offset)/2) + 1)
        AS query_text,
        dbname=DB_NAME(qt.dbid),
        OBJECT_NAME(qt.objectid,qt.dbid) AS 'object',
        qs.creation_time AS 'plan creation time'
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
WHERE OBJECT_NAME(qt.objectid, qt.dbid) IS NOT NULL
ORDER BY (qs.total_logical_reads + qs.total_logical_writes) / qs.execution_count DESC

Change the ordering expression – used in both the RANK() OVER(ORDER BY …) clause and the final ORDER BY – to produce a report based on a different measure.

I usually include 4 subreports in one email :-

- top 20 most executed stored procedures.

- top 20 highest total CPU stored procedures.

- top 20 highest average logical IOs stored procedures.

- top 20 highest maximum duration stored procedures.

Be aware of the limitations of the sys.dm_exec_query_stats view – for example, it only reports on stored procedures with plans still in cache – but I find this a cheap and easy way of exposing production performance data to a wider audience.


The Exceptional EXCEPT clause

Ok, I exaggerate, but it can be useful…

I came across some 'poorly-written' stored procedures on a SQL Server recently that were using sp_xml_preparedocument.

Unfortunately, these procs were not properly releasing the memory allocated to their XML documents – ie they were not subsequently calling sp_xml_removedocument…
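For reference, here's a minimal sketch of the correct pattern (the XML and column names are just illustrative) – every sp_xml_preparedocument should be paired with an sp_xml_removedocument to release the handle :-

DECLARE @hdoc int
DECLARE @xml nvarchar(max)
SET @xml = N'<root><item id="1"/><item id="2"/></root>'

EXEC sp_xml_preparedocument @hdoc OUTPUT, @xml      -- allocates an internal representation of the XML

SELECT id FROM OPENXML(@hdoc, '/root/item', 1) WITH (id int)

EXEC sp_xml_removedocument @hdoc                    -- releases the memory again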

I needed a quick way of identifying on the server how many stored procedures this affected..

Here’s what I used..


EXEC sp_msforeachdb 'USE ?
SELECT DB_NAME(),OBJECT_NAME(s1.id)
FROM syscomments s1
WHERE [text] LIKE ''%sp_xml_preparedocument%''
EXCEPT
SELECT DB_NAME(),OBJECT_NAME(s2.id)
FROM syscomments s2
WHERE [text] LIKE ''%sp_xml_removedocument%'' '

There are three nice features of the code above…

1. It uses sp_msforeachdb. There’s a nice blog on this statement here

2. It uses the EXCEPT clause.  So in the above query I get all the procedures which include the sp_xml_preparedocument string, but by using the EXCEPT clause I remove all the procedures which contain sp_xml_removedocument.  Read more about EXCEPT here

3. It can be used to quickly identify incorrect usage of sp_xml_preparedocument. Read more about this here

The above query isn’t perfect – I’m not properly parsing the SQL text to ignore comments for example - but for the quick analysis I needed to perform, it was just the job…


Free Microsoft one day BI Seminar in Reading

A colleague passed details of this event on to me this morning. Sounds interesting…

Free, so if you are interested, register quickly – I imagine this will soon fill up….


The overlooked OUTPUT clause

I often find myself applying ad-hoc data updates to production systems – usually running scripts written by other people. One of my favourite features of SQL syntax is the OUTPUT clause – I find this is rarely used, and I often wonder if this is due to a lack of awareness of this feature..

The OUTPUT clause was added to SQL Server in the SQL 2005 release – so has been around for quite a while now, yet I often see scripts like this…

SELECT somevalue FROM sometable WHERE keyval = XXX

UPDATE sometable
SET somevalue = newvalue
WHERE keyval = XXX

-- now check the update has worked…
SELECT somevalue FROM sometable WHERE keyval = XXX

This can be rewritten to achieve the same end-result using the OUTPUT clause.

UPDATE sometable
SET somevalue = newvalue
OUTPUT deleted.somevalue AS 'old value',
       inserted.somevalue AS 'new value'
WHERE keyval = XXX

The UPDATE statement with an OUTPUT clause also requires less IO – ie I've replaced three SQL statements with one, using roughly a third of the IO.

If you are not aware of the power of the output clause – I recommend you look at the output clause in books online
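If it helps, here's a small self-contained sketch (the temp table and values are made up for illustration, not from Northwind) showing both the old and new values returned by a single UPDATE :-

CREATE TABLE #Products (ProductId int PRIMARY KEY, UnitPrice money)
INSERT INTO #Products VALUES (1, 10.00), (2, 20.00)

UPDATE #Products
SET UnitPrice = UnitPrice * 1.1
OUTPUT deleted.ProductId  AS 'ProductId',
       deleted.UnitPrice  AS 'old value',
       inserted.UnitPrice AS 'new value'
WHERE ProductId = 2

DROP TABLE #Products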

And finally here’s an example of the output produced using the Northwind database…

 
