October 2006 - Posts

Just picked the majority of my crop, grown in a mixture of greenhouse and garden here in Harpenden, just north of London in the UK.

It's the effort of about 8 plants at a cost of about £1.50.

I know the small ones are lethal: my wife made a big pan of soup, put just one in, and it was really hot :).

Anybody got any ideas what to do with them? I'm going to try and create some chilli oil by putting some in olive oil.

 

 

Did you know that when you use the INSERTED and DELETED system materialised tables in a FOR (AFTER) trigger they are actually materialised from the transaction log (the LDF file)? Why is this bad? Transaction log writing is sequential in nature, and for optimum performance no other processes should be writing to the disk the transaction log is on (not even other transaction logs for other databases); that way the disk head is already in exactly the right position for the next write, reducing write latency and allowing your transaction to commit faster.
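As an aside, if you want to check which physical drives a database's data and log files actually live on (so you can confirm the log really does have its own spindles), the standard sp_helpfile procedure will tell you; a minimal example, assuming the play database created in the script below:

USE play
GO
EXEC sp_helpfile    -- lists logical name, physical file name, size and usage for the current database
GO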

But did you also know that ALTER TABLE &lt;blah&gt; DISABLE TRIGGER &lt;blah&gt; only disables the functionality of the trigger? Something else is going on that causes the same additional writes to the transaction log that are the overhead of using the trigger. For instance, I have a straight UPDATE: without the trigger it writes 843 KBytes to the transaction log, with the trigger it writes 80,028 KBytes – yes, that’s a tad more! Even disabling the trigger still causes the 80 MBytes to be written to the transaction log.

Note: I’m keeping this entry purely SQL Server 2000 (SP4, and SP4 with hotfix 2187 applied), tested on multiple servers; I’ll update for 2005 shortly – 2005 should use tempdb instead because of the version store.

Here is the script:-

Set up:-

CREATE DATABASE [play] ON  PRIMARY

( NAME = N'play', FILENAME = N'f:\MSSQL\MSSQL\data\play.mdf' , SIZE = 512000KB , FILEGROWTH = 10240KB )

 LOG ON

( NAME = N'play_log', FILENAME = N'f:\MSSQL\MSSQL\data\play_log.ldf' , SIZE = 512000KB , FILEGROWTH = 10240KB )

GO

EXEC dbo.sp_dbcmptlevel @dbname=N'play', @new_cmptlevel=80

GO

ALTER DATABASE [play] SET RECOVERY FULL

GO

 

create table mytest (

    id  int not null    identity primary key clustered,

    trade_quantity  int not null,

    pad_row char(4000) not null

)

create table summary (

    trade_quantity int not null

)

insert summary values( 0 )

 

set nocount on

declare @i int

set @i = 1

while @i <= 10000

begin

    insert mytest ( trade_quantity, pad_row ) values( @i, 's' )

    set @i = @i + 1

end

go

Ok, now the fun part; first let's test a straight update without the trigger.

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

update mytest

    set trade_quantity = trade_quantity + 1,

        pad_row = cast( trade_quantity as char(4000) )

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

Now, remember to substitute 20 with the value of DB_ID() for your database!
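If you are not sure what that value is, here is a quick lookup (a sketch; change the database name to suit):

-- find the database id to plug into fn_virtualfilestats (file id 2 is the log file)
select db_id( N'play' )     -- returns 20 on my box; substitute whatever you get back
go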

Results:-

--  NumberReads = 0

--  NumberWrites= 15

--  BytesWritten= 843KBytes

--  BytesRead   = 0

That’s good, we have only written to the LDF.

Now the trigger code. This is common logic – it updates a summary table. Note: this additional write and the materialisation from the LDF would also be there even if you were just querying the INSERTED and DELETED tables, e.g. in referential integrity triggers.

create trigger trg_test on mytest FOR update

as

begin

    update summary

        set trade_quantity = ( trade_quantity -

                             ( select sum( trade_quantity )

                               from deleted ) )

                           + ( select sum( trade_quantity )

                               from inserted )

 

end

go

And now the results…

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

update mytest

    set trade_quantity = trade_quantity + 1,

        pad_row = cast( trade_quantity as char(4000) )

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

--  NumberReads = 1344

--  NumberWrites= 1373

--  BytesWritten=  80,084KBytes

--  BytesRead   = 157,958KBytes

Look, we have had to do 157 MBytes of reads FROM the LDF, and the write requirement has grown from 843 KBytes to an astonishing 80 MBytes!

Now disable the trigger and try the test again…

alter table mytest disable trigger trg_test

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

update mytest

    set trade_quantity = trade_quantity + 1,

        pad_row = cast( trade_quantity as char(4000) )

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

--  NumberReads = 8

--  NumberWrites= 1355

--  BytesWritten= 80,082KBytes

--  BytesRead   = 32KBytes

The number of reads and bytes read has dramatically reduced now, but look at the number of bytes written – it's still 80 MBytes! Stone me :) I was expecting bytes written to go back to 843 KBytes!!

Let's drop the trigger and see what happens…

drop trigger trg_test

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

update mytest

    set trade_quantity = trade_quantity + 1,

        pad_row = cast( trade_quantity as char(4000) )

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

--  NumberReads = 0

--  NumberWrites= 15

--  BytesWritten= 843KBytes

--  BytesRead   = 0KBytes

Yep, back to normal now.

The amount of data written varies because, in the non-trigger situation, SQL Server is clever enough to write to the log only what actually changed, so it doesn't write the whole row. That is borne out by running the statement below multiple times: the first run causes more writes because it has more 'changes' to do.

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

 

update mytest

    set trade_quantity = trade_quantity + 1,

        pad_row = replicate( 'a', 4000 )

go

 

select *

from ::fn_virtualfilestats( 20, 2 )

go

-- 1st run = 1063424bytes
-- 2nd run =  782848bytes
-- 3rd run =  782848bytes

When you have a trigger, because you may require the INSERTED and DELETED system materialised tables, it has to put the whole row down to the transaction log (I think it's worse than that – it needs to put down both the old and the new rows). The INSERTED and DELETED tables are, in column terms, a copy of the base table; they only contain the rows being processed, but you can start to see why the trigger is so much heftier.

Why disabling the trigger does not give the minimal behaviour I’m not sure; there must be some internal storage engine requirement simply because the trigger exists. To summarise: if you are using triggers, please be aware of the above behaviour, and if you have long-running batch jobs that use triggers which you disable before commencing, it is best to drop the trigger rather than disable it.
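For clarity, here is the drop-then-recreate pattern I mean, using the trg_test trigger from above (just a sketch – in practice script the trigger out first so you can recreate it exactly as it was):

-- before the long running batch job: drop rather than disable
drop trigger trg_test
go

-- ... run the long batch UPDATE here ...

-- afterwards: recreate the trigger with its original definition
create trigger trg_test on mytest FOR update
as
begin
    update summary
        set trade_quantity = ( trade_quantity -
                             ( select sum( trade_quantity ) from deleted ) )
                           + ( select sum( trade_quantity ) from inserted )
end
go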

Also - be aware of things that use triggers, for instance merge replication!
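If you are not sure whether something has quietly put triggers on your tables (merge replication being the obvious example), here is a quick check against the SQL Server 2000 system tables (a sketch – change the table name to suit):

-- list any triggers defined on a given table
select name
from sysobjects
where type = 'TR'
  and parent_obj = object_id( 'mytest' )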

SQLServerFAQ.COM UK SQL Server User Community
Providing the Database Professional with the tools to stay current

Lots of stuff going on now: we have an event tonight (24th) in Reading and another on the 29th November, again in Reading. We are working on other venues – we are planning one in Scotland for January and need people to send us topic suggestions, so please email Martin Bell, SQL Server MVP. I'm also presenting on the 1st November in Oxford, repeating the session I did at the VBUG conference.

I've been doing a lot of cluster work of late, building 2005 HA environments. For one specific client we are using an EMC CX300 and having really big problems getting good performance on log writes – a write to the log is done in chunks of 512 bytes to 64 KBytes and the SAN just can't handle these small IOs very well. I explain in a blog entry how to go about seeing what transaction log throughput you get from your kit – extremely important, because writing to the log is critical for a well-performing and scalable system.

Don't forget to check out the events section – some good ones, albeit a bit BI orientated, but I'll fix that; check out the 2-day SQL tuning class and the 5-day BI boot camps.

PS.. No word on the SP2 beta yet, and if you are on SP4 consider moving to hotfix 2187; too many problems with SP4 imho.

Tony Rogerson, SQL Server MVP
tonyrogerson@sqlserverfaq.com
24th October 2006
 

Build a Microsoft Cluster environment for less than £17,000
Most work I do is for small to medium-size companies, some with enterprise-size systems, some not; there is more interest in high availability and resilience in this space now – it is no longer just the playground of money-to-burn enterprise companies. Anyway, in this post I comment on the HP MSA500 packaged cluster and what you can build for under £17K.

An average system (7 x 146GB disks, 4 in RAID 10, 2 in RAID 1 and 1 hot spare, 5GB of RAM) costs.....

Link to blog entry
 

SQL Server 2005 Rebuilding Indexes
Simon Sabin talks about problems he's had rebuilding indexes and performance reasons for doing so when upgrading.

Link to blog entry
 

Server Commissioning Part #1 - Transaction Log and Performance
For any write activity to the database the write performance to the transaction log is critical. If you have high latency on writing to the transaction log then your process will be slow. This article comes from problems Tony Rogerson has had building a cluster using the EMC CX300 and some must knows when building and trialling your kit.

Link to blog entry
 

Comments on the UPDATE statement
What's best? Specify only the columns you are updating or does it matter?

Link to blog entry
 

About the Group, UK's biggest SQL Server community

The group has nearly 9,500 members in the UK; the site holds over 900,500 newsgroup posts and has over 300 distinct daily visitors generating in excess of 3,000 sessions. The site also hosts a growing number of short video demonstrations on SQL Server that can be viewed in Media Player.

We are one of the CodeZone Federated Search members – you will see us in the help system built into Visual Studio 2005 and SQL Server 2005. We made it into the product; my work is done :-)!

The group was founded in 1998 and is still run by Tony Rogerson, who has been awarded SQL Server MVP (Most Valuable Professional) for 8 years running; as well as running the group, his day job is as an independent SQL Server specialist/consultant.

If you do not want to receive this briefing again then please reply with the subject 'remove' or logon to http://sqlserverfaq.com and adjust your profile.

Did you know SQL Server is clever enough not to log stuff it doesn't really need to?

So, this statement ->

UPDATE mytest
   SET smallcol = smallcol + 1
     , somedata = somedata
WHERE id = @i

Is the same as writing this ->

UPDATE mytest
   SET smallcol = smallcol + 1
WHERE id = @i

Also, it's not the whole 8KB page that gets logged, just the data that's changed – in this case the column smallcol; the above statements actually cause 512 bytes of write activity to the log.

Ok, referring back to my server commissioning article, increasing the number of rows per transaction improves performance dramatically in this case. The single-row, single-integer-column update requires a log write of 512 bytes per row, so for 100,000 rows that's 48 MBytes; however, changing the transaction so that 10,000 rows are updated per transaction (still 10,000 single-row UPDATE statements per commit) we get 11 MBytes written – what a difference!
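As an illustration, this is the batching pattern I mean – single-row updates committed every 10,000 rows (a sketch, assuming a mytest table with id and smallcol columns as used in the statements above):

SET NOCOUNT ON
DECLARE @i int
SET @i = 1

BEGIN TRAN

WHILE @i <= 100000
BEGIN
    UPDATE mytest
       SET smallcol = smallcol + 1
    WHERE id = @i

    IF @i % 10000 = 0           --  commit every 10,000 rows to batch up the log writes
    BEGIN
        COMMIT TRAN
        IF @i < 100000
            BEGIN TRAN
    END

    SET @i = @i + 1
END

IF @@TRANCOUNT > 0
    COMMIT TRAN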

Finally, doing the UPDATE in one go, as below, gives our best performance (doh, no surprises there!) – just 9 MBytes written to the log in 168 IOs.

UPDATE mytest
   SET smallcol = smallcol + 1

I suppose this is more directly applicable to anybody doing batch job cursor updates than anything else.

I guess the moral of the story is this: when writing your update or population routines always go for the option that creates the fewest IOs; that means benchmarking using the statistics from ::fn_virtualfilestats (NumberWrites and NumberReads), as sketched below.
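A minimal sketch of that measurement – snapshot the log file statistics, run the routine, then take the difference (hardcode your own database id, as before):

-- snapshot the log file (file id 2) statistics before the test
SELECT NumberWrites, BytesWritten
INTO #before
FROM ::fn_virtualfilestats( 20, 2 )

-- ... run the update or population routine you are benchmarking here ...

-- and the delta afterwards
SELECT write_ios     = f.NumberWrites - b.NumberWrites,
       bytes_written = f.BytesWritten - b.BytesWritten
FROM ::fn_virtualfilestats( 20, 2 ) f
    CROSS JOIN #before b

DROP TABLE #before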

It’s quite difficult to work out if your kit will support the load that you want to put on it. Through my consultancy years I’ve often been in the position of trying to determine whether an existing system will be capable of scaling up to more users and more data – this is a difficult task, but there are methods.

 

In this article I suggest a method for determining the performance and ability of your hardware in servicing the write load for your system, specifically determining how many updates/inserts/deletes can be done with the performance of the transaction log being the only factor, so discounting concurrency (for the moment!).

 

We need to consider a couple of things before going any further. Firstly, an INSERT / UPDATE / DELETE is not complete – and SQL Server will not allow the process to continue – until it has been told that the data physically resides in the transaction log on disk (the LDF file). The MDF is written in a different manner: dirty pages (changes) are batched up and written as a big burst of data around checkpoint time, or when SQL Server decides it needs the pages for some other data and so has to write them out to disk. The throughput of INSERTs, UPDATEs and DELETEs is therefore directly related to how fast you can get data into the LDF on disk.

 

Straight to the test: create a test database, changing G: to where you want to test your transaction log performance and E: to where you want the MDF file. Try and keep them separate, especially for this test; in practice writing to the transaction log is almost purely sequential and MDF access is random, so you want the transaction log on its own disks where nothing else will be moving the disk head.

 

CREATE DATABASE [IOMeter] ON  PRIMARY

( NAME = N'IOMeter', FILENAME = N'e:\IOMeter.mdf' , SIZE = 1048576KB , FILEGROWTH = 1024KB )

 LOG ON

( NAME = N'IOMeter_log', FILENAME = N'g:\IOMeter_log.ldf' , SIZE = 1048576KB , FILEGROWTH = 10%)

GO

EXEC dbo.sp_dbcmptlevel @dbname=N'IOMeter', @new_cmptlevel=90

GO


ALTER DATABASE [IOMeter] SET RECOVERY FULL

GO

ALTER DATABASE [IOMeter] SET PAGE_VERIFY CHECKSUM 

GO

 

Let’s create our test table. This can be one of your own tables – in fact, if you have the time you could use your own schema and test a specific stored procedure's performance – but for this example I just want to illustrate some log behaviour, so I’ll use a simple table…

 

USE IOMeter

GO

CREATE TABLE mytest (

    id  int not null identity primary key clustered,

    somedata char(4000) not null

)

GO

CREATE TABLE io_results (

    test_name varchar(100) not null,

    entry_date  datetime    not null default( getdate() ),

    write_ios   int         not null,

    write_bytes bigint      not null,

    transaction_size int    not null,

    ios_per_row decimal( 10, 3 ) not null,

    ios_per_xact decimal( 10, 3 ) not null,

    bytes_per_io decimal( 10, 3 ) not null,

    mbytes_per_sec decimal( 10, 3 ) not null,

    rows int not null,

    commits int not null,

    duration_seconds int not null

)

 

Now for the test. This is your test harness; if you are going to use your own stuff then I would suggest putting it in another stored procedure and replacing the INSERT mytest with a call to your stored procedure.

 

CREATE PROC test_perf

    @test_name varchar(100),

    @tran_batch_size int

AS

BEGIN

    SET NOCOUNT ON

 

    IF @@TRANCOUNT > 0

        ROLLBACK            --  Make sure no transaction open

 

    WAITFOR DELAY '00:00:05'    --  System settle, helps separate stuff in PERFMON

 

    TRUNCATE TABLE mytest

 

    CHECKPOINT                  --  Write any dirty pages to disk

 

    DBCC DROPCLEANBUFFERS       --  Empty data cache

    DBCC FREEPROCCACHE          --  Empty execution cache

 

    ALTER DATABASE IOMeter SET RECOVERY SIMPLE

    CHECKPOINT

    ALTER DATABASE IOMeter SET RECOVERY FULL

 

    WAITFOR DELAY '00:00:05'    --  System settle, helps separate stuff in PERFMON

 

    DECLARE @i int

    DECLARE @c int

    SET @i = 1

    SET @c = 0

 

    SELECT 'START' AS op, GETDATE() AS op_time, *

    INTO #filestats

    FROM ::fn_virtualfilestats( db_id(), 2 )

 

    BEGIN TRAN

   

    WHILE @i <= 100000

    BEGIN

        INSERT mytest ( somedata ) VALUES( CAST( @i AS char(4000) ) )

 

        IF @i % @tran_batch_size = 0

        BEGIN

            SET @c = @c + 1

            COMMIT TRAN

            IF @i < 100000

                BEGIN TRAN

        END

 

        SET @i = @i + 1

    END

    IF @@TRANCOUNT > 0

        COMMIT TRAN

 

    INSERT #filestats (

        op, op_time, DbId, FileId, TimeStamp, NumberReads, BytesRead,

        IoStallReadMS, NumberWrites, BytesWritten,

        IoStallWriteMS, IoStallMS, BytesOnDisk, FileHandle )

    SELECT 'END', getdate(), DbId, FileId, TimeStamp, NumberReads, BytesRead,

        IoStallReadMS, NumberWrites, BytesWritten,

        IoStallWriteMS, IoStallMS, BytesOnDisk, FileHandle

    FROM ::fn_virtualfilestats( db_id(), 2 )

 

    INSERT io_results (

        test_name,

        write_ios,

        write_bytes,

        transaction_size,

        ios_per_row,

        ios_per_xact,

        bytes_per_io,

        mbytes_per_sec,

        rows,

        commits,

        duration_seconds )

    SELECT @test_name,

           Write_IOs   = e.NumberWrites - s.NumberWrites,

           Write_Bytes = e.BytesWritten - s.BytesWritten,

           transaction_size = @tran_batch_size,

           IOs_Per_Row = CAST( (e.NumberWrites - s.NumberWrites) AS decimal( 10, 3 ) ) / 100000,

           IOs_Per_Xact= CAST( (e.NumberWrites - s.NumberWrites) AS decimal( 10, 3 ) ) / @c,

           Bytes_Per_IO= CAST( ( e.BytesWritten - s.BytesWritten ) AS decimal( 20, 3 ) ) / (e.NumberWrites - s.NumberWrites),

           MBytes_Per_Sec = ( CAST( ( e.BytesWritten - s.BytesWritten ) AS decimal( 20, 3 ) ) / DATEDIFF( second, s.op_time, e.op_time ) ) / 1048576,

           Rows        = (SELECT COUNT(*) FROM mytest),

           Commits     = @c,

           duration_seconds = datediff( second, s.op_time, e.op_time )

    FROM #filestats s

        CROSS JOIN #filestats e

    WHERE s.op = 'START'

      AND e.op = 'END'

 

END

GO

 

If you are doing this using SQL Server 2000 you will need to modify the calls to ::fn_virtualfilestats slightly – you need to hardcode the database id, because db_id() won’t work there.
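For example, on SQL Server 2000 look the id up once and then hardcode it in the two places the procedure calls ::fn_virtualfilestats (a sketch):

-- find the database id once...
SELECT DB_ID( N'IOMeter' )      -- say it returns 20
GO
-- ...then hardcode that number in the calls inside test_perf
SELECT * FROM ::fn_virtualfilestats( 20, 2 )
GO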

 

Now for our tests. The point is to try different transaction sizes so that you get a range of block sizes being written to disk – this is what kills the SAN performance at the client where I was testing the CX300.

 

exec test_perf 'test 1', 1

go

exec test_perf 'test 1', 10

go

exec test_perf 'test 1', 100

go

exec test_perf 'test 1', 1000

go

exec test_perf 'test 1', 10000

go

exec test_perf 'test 1', 100000

 

And the results….

 

select commits, write_ios, write_bytes, transaction_size, ios_per_row, ios_per_xact, bytes_per_io, mbytes_per_sec, duration_seconds

from io_results

order by entry_date

 

 

As can be clearly seen, the larger the transaction size the better the throughput you get to the disk. Note: the above was done against a direct-attached SATA disk on my development box with nothing else using the disk (that is very important for these tests!!).

 

Look at the difference between a transaction size of a single row and one of 1,000 rows: the total amount written to the log with 1 row per transaction is 467 MBytes, with 1,000 rows it's 427 MBytes. More importantly, look at the MBytes per second we get and the difference in duration: the single-row transactions only manage 24 MBytes per second and a duration of 19 seconds, while the 1,000-row transactions manage a healthy 53 MBytes per second and an 8-second duration!

 

Let’s look at the performance monitor graph to see what it’s telling us.

 

 

Reading from the left, 1 row transaction, 10, 100, 1000, 10000 and 100000.

 

There are considerably more log flushes/sec for the smaller transaction sizes; the log flush waits/sec is also considerably higher the smaller the transaction size.

 

The graph also tells me that my optimum transaction size is around 100 rows (which equates to an IO size of around 56 KBytes); the log bytes flushed/sec levels out at that point and the Avg. Disk Write Queue Length starts to grow faster.

 

You can try this without even installing SQL Server. SQL Server does not allow outstanding IO when writing to the log; writes are done asynchronously, yes, but they are written straight through to disk – SQL Server doesn’t make use of the Windows system cache. The way to try this performance test without installing SQL Server is to use IOMeter; download this free tool from http://sourceforge.net/projects/iometer/ and set up your tests as follows:-

 

On [Disk Targets] select the drive you want to test, enter 1024 for Maximum Disk Size in sectors and set # of Outstanding I/Os to 1; on [Access Specifications] choose 4K; 0% Read; 0% random, click Edit Copy, then change Transfer Request Size to the size of the transaction you want to simulate (4K, 64K etc…) and leave everything else alone.

 

You can play – try changing the Outstanding I/Os to a higher number, say 64, and you’ll see performance get significantly better; for example, with 1 outstanding I/O my 64K test gives 43 MBytes per second, with 64 outstanding I/Os I get 82 MBytes per second. This is where the SAN people bastardise the tests – SANs need outstanding I/O to bulk the data up so they can use larger IO sizes. Well, that’s my experience so far and I know it’s limited (EMC CX300), but there you go – please try this at home and feed the results back to me for different SANs….

 

To summarise, what’s the point in doing all this? Well, when writing data SQL Server is absolutely dependent on the speed at which it can write to the transaction log. If you have an OLTP application with lots of recurrent small transactions (a trading system for instance) then you’ll see big differences in system performance and scalability by getting the IO subsystem to support the load, and the method above can be used to help prove that.

 

As with everything it’s not that simple, but it goes a long way towards helping you prove your performance and make a better guesstimate as to scaling.

 

There is so much more to say on this and I'll try and do a Part 2 soon; I'll also be doing other entries on this subject to cover the other stuff – concurrency, MDF writes etc. They will appear as time permits....

 

 

Recently I was asked to cost up a new cluster and I was astonished at how cheap it is now – an HP cluster with 5GB of RAM per machine and 7 x 146GB disks for under £17,000.

 

The cost of creating and using a Microsoft Cluster has dropped significantly of late; in fact, in my honest opinion it’s now practical for small to medium-size businesses and systems.

 

Take the HP ProLiant DL380 G4 Packaged Cluster with MSA500 G2, for instance: places like Inmac and PC World are selling the base configuration (2 x DL380s and the MSA500) for a mere £4,781. Ok, that is with 1GB of RAM, no disks and no software, but that price is really good.

 

A 146GB 10Krpm Ultra320 SCSI disk knocks out at around £215, and for memory a 2GB kit can be had for £300 (£600 for 2 machines).

 

Windows 2003 Enterprise R2 x64 knocks out at around £2,100, of which you’ll need two copies; SQL Server 2005 Standard x64 will cost you £4,000 per physical processor, and if you a) only have 1 physical processor (remember, you don’t pay for cores) and b) only use the cluster in an active/passive scenario (SQL Server is only running on one of the machines at a time) then you need only pay £4,000.

 

Two node clustering is included with the standard edition of SQL Server 2005!

 

So, an average system – 7 x 146GB disks (4 in RAID 10, 2 in RAID 1 and 1 hot spare) and 5GB of RAM – will cost you:

         

          Package base      £4,781
          7 disks           £1,505
          4 x 2GB kits      £1,200
          Windows 2003      £2,000
          SQL 2005          £4,000

          Kit total         £13,486

 

Ok, there are still some bits on top of that that you need to consider – racking, cabling into existing infrastructure, keyboard, mouse and monitor if required; but a complete working cluster for under £15,000 is special!

 

If you want me to come and build it for you, it usually takes me 2 days to install, configure and commission (test) it, and that will set you back another £1,700.

 

So, to conclude – clustering really is in the price range of everybody now.

 

The actual architecture I recommend to people: cluster your production server and have a hot standby if possible, but definitely have a remote disaster recovery server. The hot standby sits next to the cluster and protects you in case the MSA500 itself fails – you use database mirroring (full safety) for that. For the remote DR server you need to log ship, because it is usually across a link with higher latency (don’t confuse available bandwidth with throughput), and because the Standard Edition of SQL Server doesn’t do asynchronous mirroring, log shipping is the way to go.
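At its simplest, the log shipping side is just scheduled log backups restored WITH NORECOVERY on the DR server; a minimal manual sketch (the database name, share and file name are hypothetical – in practice you would use the built-in log shipping jobs or your own scheduled jobs):

-- on the production server, on a schedule (say every 15 minutes):
BACKUP LOG MyDatabase
    TO DISK = N'\\drserver\logship\MyDatabase_tlog.trn'

-- on the remote DR server, restore each backup in sequence,
-- leaving the database able to accept further log restores:
RESTORE LOG MyDatabase
    FROM DISK = N'\\drserver\logship\MyDatabase_tlog.trn'
    WITH NORECOVERY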

 

If I get time I’ll knock a diagram up, any questions?

The attached ZIP file contains all the sample code I used at today's VBUG conference.

Some notes: to do the concurrency tests you first need to set the is_running flag to 'N' and then copy the test script into 5 or more connection windows, executing them in turn; the shortcut keys to do that in Management Studio are CTRL+N (New Query), CTRL+V (Paste) and CTRL+E (Execute).

Once they are running (and waiting), go to Results.SQL, update the flag to 'Y' and then wait a while until they all finish – check the output on each to see whether you got a PK violation or a deadlock.

Any questions then don't hesitate to either post a comment here (preferred so everybody gets the benefit) or email me directly to tonyrogerson@sqlserverfaq.com.

I will be repeating this session at my own user group evening next Tuesday and also at the NextGen UG meeting in Oxford on the 1st Nov. If I get time and my machine set up I'll record a blogcast of the presentation and post it here too.

Remember, concurrency problems are often difficult to track down; for instance some of the scripts work fine on my laptop (at different times, too) because the hardware is slower, so always be aware of SQL Server locking behaviour – production boxes are often more powerful and have multiple processors....

 

The next UK User Community meeting is next Tuesday (24th Oct) in Reading; places are strictly limited, so if you want to come along then register quickly, and don't forget our meetings are free.

It's also your opportunity to gain fame and exposure: we are doing a trial session where attendees are encouraged to come on stage and give a 5-minute SQL tip. Don't forget, even the simplest of tips may be of use to other SQL professionals, so don't be afraid!! Tips, technical explanations, experiences – anything related to SQL Server – no marketing pitches though!

Agenda

Registration is at 5.30pm; the evening will commence at 6pm and finish at 9pm.

6pm - 6.30pm
Round Table Discussion
Update on what's been going on and is going on in the SQL Server space.

Bring your SQL problems and ask the audience, bounce ideas - anything related to SQL Server.

6.30pm - 7.30pm
Techniques for developing against SQL Server
Tony Rogerson, SQL Server MVP

In this session Tony will talk through issues that directly affect developers when implementing solutions against Microsoft SQL Server: collations (SQL vs Windows, considerations for performance, storage and joining), enhancements to FOR XML in SQL Server 2005 that will benefit you with set-based concatenation and string manipulation, programming (SET vs SELECT for value assignment and associated problems, an introduction to sub-queries, ORDER BY in a VIEW) and finally tackling scalability – a cursory discussion and demonstration of single and concurrent query testing.

7.30pm - 7.45pm
Break

7.45pm - 8.30pm
Asynchronous Processing in SQL Server
Simon Sabin, SQL Server MVP

If you want to achieve the best performance for your users you have always been told to reduce the length of your transactions. Your DBA is horrified when you want to use xp_cmdshell or the sp_OA procedures in a trigger to call an external app. So how do you get all that processing done? If we need to maintain or update external systems transactionally we are faced with a big headache: consistency or performance.

SQL Server is a large multi-threaded application; in this session we will look at how we can get SQL Server to perform asynchronous processing, guaranteed and transactional. Typical examples are logging of usage statistics, updating external systems and batch processing.

All hopefully will keep your DBA happy.

8.30pm - 9pm
Bring and share a SQL Server tip - open Microphone session
Whistle-stop tips and tricks session; anybody gets 1 to 'x' minutes to share a tip centred around SQL Server.

For more information http://www.sqlserverfaq.com?eid=82.

Here I sit on a wet, windy and thundery Sunday afternoon getting ready for my consultancy visits this week, fed up because there isn't enough time in the day, catching up on that horrible admin stuff – and behold, I receive an email: it's from the MVP Award admin and it's about my Microsoft MVP Award.

Yippeee – I've been re-awarded for 2007 for my activities in the SQL community; that's 9 years running now.

"The Microsoft MVP Award is our way of saying thank you and to honor and support the significant contributions you make to communities worldwide. As a recipient of Microsoft’s Most Valuable Professional award, you join an elite group of technical community leaders from around the world who foster the free and objective exchange of knowledge by actively sharing your real world expertise with users and Microsoft.  Microsoft salutes all MVPs for promoting the spirit of community and enhancing people’s lives and the industry’s success everyday.  To learn more about the MVP Program, visit: www.microsoft.com/mvp."

Ok, sorry for blowing my own trumpet, blah blah, but this award is quite special and it significantly helps the SQL user group over here, which you will see going through a change in the coming months as my time frees up and I start to devolve some of the responsibilities to other UK community champions....