SQL Server

Abstract:

Recency Frequency Intensity/Monetary (RFI/M) is a powerful analytical technique for identifying data patterns and measuring business performance. An introduction to the technique will be given; however, the main focus of the session will be on demonstrating how RFI/M can be performed using a number of SQL features such as data windowing, the OVER clause and PARTITION BY, CROSS APPLY and Common Table Expressions, including how you can nest table expressions. The session should benefit both inexperienced and experienced SQL coders and analysts; each construct will be explained, as well as the query plans produced. Demos will be done on AdventureWorks, which we actually discover is going out of business!
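To give a flavour before the session, here is a minimal sketch of the kind of windowed query RFI/M leads to, using a CTE, the OVER clause and PARTITION BY against AdventureWorks' Sales.SalesOrderHeader (an illustration only, not the session's demo code):

--  Sketch only - Recency, Frequency and Monetary per customer
with customer_orders as (
    select  CustomerID,
            --  Recency: most recent order per customer
            last_order  = max( OrderDate ) over( partition by CustomerID ),
            --  Frequency: number of orders per customer
            frequency   = count(*) over( partition by CustomerID ),
            --  Monetary: total spend per customer
            monetary    = sum( TotalDue ) over( partition by CustomerID )
    from Sales.SalesOrderHeader
)
select distinct
       CustomerID,
       recency_days = datediff( day, last_order, getdate() ),
       frequency,
       monetary
from customer_orders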

Feel free to email me with any questions, comments and suggestions - tonyrogerson@torver.net

http://sqlblogcasts.com/blogs/tonyrogerson/UKSSUG/RF_SQLBits2010%20-%20ACC%20and%20TR3.zip

 

Allan Mitchell and I are doing a double act: Allan is becoming one of the leading guys in the UK on StreamInsight and will give an introduction to this exciting new technology; on top of that I'll be talking about SQL Server Disk IO - well, "Disk" might not be relevant anymore, because I'll be talking about SSDs and Fusion IO. Basically I'll be covering the underpinnings - making sure you understand them and get them right, how to monitor, etc. If you've any specific problems or questions just ping me an email: tonyrogerson@sqlserverfaq.com.

To register for the event see: http://sqlserverfaq.com/events/217/SQL-Server-and-Disk-IO-File-GroupsFiles-SSDs-FusionIO-InRAM-DBs-Fragmentation-Tony-Rogerson-Complex-Event-Processing-Allan-Mitchell.aspx

18:15 SQL Server and Disk IO
Tony Rogerson, SQL Server MVP


In this session Tony will talk about RAID levels, how SQL Server writes to and reads from disk, the effect SSDs have, and other options for throughput enhancement like Fusion IO. He will look at the effect fragmentation has and how to minimise its impact, examine the file structure of a database, and talk about the benefits multiple files and file groups bring. We will also touch on Database Mirroring and the effect that has on throughput, and how to get a feeling for the throughput you should expect.

19:15 Break

19:45 Complex Event Processing (CEP)
Allan Mitchell, SQL Server MVP
http://sqlis.com/sqlis

StreamInsight is Microsoft’s first foray into the world of Complex Event Processing (CEP) and Event Stream Processing (ESP). In this session I want to give an introduction to this technology. I will show how and why it is useful. I will get us used to some new terminology, but best of all I will show just how easy it is to start building your first CEP/ESP application.

Table variables are not transactional - that is to say, BEGIN TRAN....ROLLBACK has no effect on them; we can use that to our advantage.

Have you ever been in the situation where you use a log table within your application to record progress, but because your process is in a transaction you unfortunately lose what has happened when it rolls back? Well - not anymore - just use a table variable....

create table my_log_file (
    msg varchar(1024)
)

declare @t table ( msg varchar(1024) )

begin tran

--  some processing
--  ...
--  ...

--  logging as we go....
insert my_log_file ( msg ) values( '1' )
insert my_log_file ( msg ) values( '2' )
insert my_log_file ( msg ) values( '3' )
insert my_log_file ( msg ) values( '4' )

--  copy the log into the table variable before the rollback
insert @t ( msg ) select msg from my_log_file

rollback

select * from my_log_file   --  empty: the rollback undid the inserts
select * from @t            --  intact: the table variable ignores the rollback

SQL Server 2008 SP1 Cumulative Update (CU5) link: http://support.microsoft.com/kb/975977/LN/

Details on how they have fully fixed the OPTION( RECOMPILE ) Parameter Embedding Optimisation: http://support.microsoft.com/kb/976603/

What is the Parameter Embedding Optimisation? Well, basically it lets you do this...

WHERE email = coalesce( @email, email )
      AND name = coalesce( @name, name )
OPTION ( RECOMPILE )

Basically, if @email is NULL then the optimiser optimises out that clause in the query, giving better access to index choices and negating the need for dynamic SQL.
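For context, here is a complete version of the pattern; dbo.Customers and its columns are made up purely for illustration:

--  Sketch only: dbo.Customers, email and name are hypothetical
declare @email varchar(255),
        @name  varchar(100)

select @email = null, @name = 'fred'

select *
from dbo.Customers
where email = coalesce( @email, email )
  and name  = coalesce( @name, name )
option ( recompile )    --  @email is NULL, so at compile time the optimiser
                        --  embeds the parameter values and removes the email predicate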

A very cool feature that came in with RTM; it was disabled in CU4 because of the bug, then fixed and re-enabled in CU5.

 

Andrew Fryer has done a number of blog posts on virtualisation for SQL Server DBAs - very good and informative.

http://blogs.technet.com/andrew/archive/2009/10/16/virtualisation-for-the-dba-part-1-are-you-bovvered.aspx

http://blogs.technet.com/andrew/archive/2009/10/12/virtualisation-for-the-dba-part-2-sql-server-management.aspx

http://blogs.technet.com/andrew/archive/2009/10/20/virtualisation-for-the-dba-part-3-sql-server-performance.aspx

http://blogs.technet.com/andrew/archive/2009/10/21/virtualisation-for-the-dba-part-4-licensing-and-support.aspx

 

There is so much out in the cosmos to help with the manageability of SQL Server; this particular download from Microsoft is the early concept that later became the Management Data Warehouse in SQL Server 2008, which frankly is way cool.

This really cool download helps you find bad queries, check wait stats and more - here is the overview from the download page....

Overview

The SQL Server 2005 Performance Dashboard Reports are Reporting Services report files designed to be used with the Custom Reports feature introduced in the SP2 release of SQL Server Management Studio. The reports allow a database administrator to quickly identify whether there is a current bottleneck on their system, and if a bottleneck is present, capture additional diagnostic data that may be necessary to resolve the problem. For example, if the system is experiencing waits for disk IO the dashboard allows the user to quickly see which sessions are performing the most IO, what query is running on each session and the query plan for each statement.

Common performance problems that the dashboard reports may help to resolve include:
- CPU bottlenecks (and what queries are consuming the most CPU)
- IO bottlenecks (and what queries are performing the most IO)
- Index recommendations generated by the query optimizer (missing indexes)
- Blocking
- Latch contention
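If you want a flavour of the sort of thing the dashboard surfaces, a DMV query along these lines (my own sketch, not the dashboard's actual source) shows which current sessions are doing the most IO and what they are running:

--  Sketch: current requests ordered by IO, with their statement text.
--  Needs VIEW SERVER STATE permission; SQL Server 2005 onwards.
select  r.session_id,
        r.reads,
        r.writes,
        r.wait_type,
        statement_text = st.text
from sys.dm_exec_requests r
cross apply sys.dm_exec_sql_text( r.sql_handle ) st
where r.session_id <> @@spid
order by r.reads + r.writes desc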
 
 

Remember, it's SP2 of SQL Server 2005 that you need.

Say you have a table, and that table has a secondary history table so you can do windowing - for instance, the settings on a specific entity, such as the date we started retaining fees on a debtor account.

Ordinarily most people would resort to a long series of ORs checking the inserted and deleted tables to see what has changed.

Instead, you can use GROUP BY, DISTINCT and HashBytes to work out which rows (which keys) have changed and actually require action (updating the end date on the base table and inserting a new row with the new settings).

The trigger below only shows the SELECT statement that gives you the complete logic I'm talking about; of course, this is for testing only - you shouldn't be putting SELECTs in triggers - but it highlights the behaviour...

create table some_base_table (
    batch_id        int not null primary key clustered,
    batch_name      varchar(250) not null,
    some_parm       int null,
    some_parm2      int null
)

insert some_base_table ( batch_id, batch_name, some_parm, some_parm2 )
    values( 1, 'hello fred', 1, 2 )
insert some_base_table ( batch_id, batch_name, some_parm, some_parm2 )
    values( 2, 'hello fred', 1, 2 )
insert some_base_table ( batch_id, batch_name, some_parm, some_parm2 )
    values( 3, 'hello fred', 1, 2 )
insert some_base_table ( batch_id, batch_name, some_parm, some_parm2 )
    values( 4, 'hello fred', 1, 2 )
insert some_base_table ( batch_id, batch_name, some_parm, some_parm2 )
    values( 5, 'hello fred', 1, 2 )
insert some_base_table ( batch_id, batch_name, some_parm, some_parm2 )
    values( 6, 'hello fred', 1, 2 )
go

create trigger trg_some_base_table_check_for_changes on some_base_table for update
as
begin

    --  a key that has really changed yields two distinct hashes (old image
    --  and new image); an unchanged key yields one - hence HAVING COUNT(*) > 1
    select batch_id
    from (
        select distinct batch_id, hash_combined = hashbytes( 'sha1', combined )
        from (  select batch_id,
                       combined =(  select batch_id, batch_name, some_parm, some_parm2
                                    from deleted c       --  need old values
                                    where c.batch_id = d.batch_id
                                    for xml path( '' ) )
                from deleted d
                union all
                select batch_id,
                       combined =(  select batch_id, batch_name, some_parm, some_parm2
                                    from some_base_table c       --  need current values (could use inserted here)
                                    where c.batch_id = d.batch_id
                                    for xml path( '' ) )
                from deleted d
            ) as r
        ) as c
    group by batch_id
    having count(*) > 1

end
go

 

 

--  test

--  an update that doesn't really change anything ('hello fred' is already
--  lower case) - the trigger should report no changed keys
update some_base_table
    set batch_name = lower( batch_name )
where batch_id = 2

--  same again, but across every row
update some_base_table
    set batch_name = lower( batch_name )

Update 20091022:

HashBytes will only accept 8000 bytes of input, so if you have a long row with lots of columns you break this and get a binary truncation error. Instead of using HashBytes, use CHECKSUM as discussed earlier in the comments: CHECKSUM works across the varchar(max) that the FOR XML gives you. The important thing here is the use of FOR XML to get around the incompatible data type problem.
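To make the swap concrete, a quick illustration (my sketch) of why CHECKSUM copes where HashBytes breaks - CHECKSUM happily takes the varchar(max) that the FOR XML subquery produces:

--  Sketch: HashBytes (pre-SQL Server 2016) is limited to 8000 bytes of input;
--  CHECKSUM takes the whole varchar(max)
declare @combined varchar(max)

set @combined = replicate( convert( varchar(max), 'x' ), 10000 )

select checksum( @combined )                --  fine across all 10000 characters
--select hashbytes( 'sha1', @combined )     --  breaks on SQL 2005/2008 (8000 byte limit)

In the trigger above, hashbytes( 'sha1', combined ) simply becomes checksum( combined ).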

 

 

Lots of really useful information: http://support.microsoft.com/kb/2001270/en-us

A must-read for anybody using, or considering using, database mirroring.

Tony.

Don't forget tomorrow night is the first Cardiff-based user group meeting, where I will present on Ensuring Business Continuity (Backups, Mirroring and Monitoring) and Allan Mitchell will cover SQL 2008 Integration Services - some of the things that make it better; details: http://sqlserverfaq.com?eid=198.

Also tomorrow night there is an Edinburgh user group meeting where Rob Carrol will present on "Your own Performance Analysis Toolkit" and Martin Bell will give an "Introduction to Powershell"; details: http://sqlserverfaq.com?eid=202.

Mark Whitehorn is organising an MSc in Business Intelligence - a 1-year full-time or 2-year part-time course at the University of Dundee; for more information check my blog about it.

There is a permanent DBA job going at Old Mutual Asset Managers (see details); for more information just email me, Tony Rogerson, and I'll pass on the details - no agencies though!

I posted the decks and examples from my talk at the SQL Server Data Management Conference last Tuesday - click for the link.

Allan Mitchell and I will be starting weekly LiveMeetings again from the end of this month - I'll get the details and abstracts on the site later in the week.

Next week we have a Reading meeting on Wednesday at TVP, which will have a developer/DBA focus (agenda to be confirmed this week); we also have a Manchester user group meeting on the Thursday, which will be a Reporting Services evening.

If you don't want these emails just reply to me with remove in the subject.

Many thanks

Tony Rogerson
SQL Server MVP
http://sqlserverfaq.com
http://sqlblogcasts.com/blogs/tonyrogerson

Note that I will be going into more detail in a series of 30-40 minute LiveMeetings starting towards the end of October; please see sqlserverfaq.com for more information - I'll be posting the info this weekend, so check back after then. The session abstract: This predominantly demo-based session explores the features around the management of SQL Server 2008, specifically: the Management Data Warehouse (Data Collector) and its use as a problem determination tool and for capacity planning; using Powershell and SQLCmd for routine maintenance and in applications; Event Notifications and Extended Events for keeping an eye out for unfolding problems; Policy Based Management to protect your production environment; and Dynamic Management Views (DMVs) for your day-to-day administration. The demos will be used to build a real-world working system.

If you want a copy of the database for the Data Collector (MDW) demo workload you can download it here: http://sqlcontent.sqlblogcasts.com/meetings/ms20090929_datacollectiondemo.zip (75MB); make sure you check out the attached ZIP which contains the script prompt I used which will help you find your way through - any questions just email me tonyrogerson@torver.net.

The SQL demos and my demo talk script can be downloaded from here: http://sqlcontent.sqlblogcasts.com/meetings/MS20090929_SlideDeck_SQLDemos_TalkNotesScript.zip

References

Enterprise Policy Management Framework

http://www.codeplex.com/EPMFramework

SQL Server Policy Based Management Blog

http://blogs.msdn.com/sqlpbm/archive/2008/04/20/policy-based-management-overview.aspx

SQL Server 2008 Feature Pack

http://www.microsoft.com/downloads/details.aspx?familyid=B33D2C78-1059-4CE2-B80D-2343C099BCB4&displaylang=en

Keep up-to-date with any new policies

Get Report Builder 2.0

Books Online: “Administering Servers by Using Policy-Based Management” – “Configuring Alerts to Notify Policy Failures”: ms-help://MS.SQLCC.v10/MS.SQLSVR.v10.en/s10de_4deptrbl/html/ef2a7b3b-614b-405d-a04a-2464a019df40.htm

Central Management System

http://sqlcms.codeplex.com/ (Buck Woody Microsoft)

Powershell

Free Powershell scripts -> http://www.idera.com/Products/Free-Tools/PowerShell-scripts/

Extended Events

http://www.codeplex.com/ExtendedEventManager/ (by Jonathan Kehayias SQL MVP)

Event Notifications

http://sqlblogcasts.com/blogs/martinbell/archive/2009/05/22/Event-Notifications.aspx

http://sqlblogcasts.com/blogs/martinbell/archive/2009/05/26/WMI-Event-Alerts.aspx

 
