July 2006 - Posts

I'm building two SQL Server 2005 clusters on Windows 2003 Enterprise R2 up in Birmingham - a nice bit of kit: four DL380s and two MSA500 G2 SANs, which come as a cluster package.

Like most people, I'm using the x64 editions of Windows 2003 and SQL Server 2005.

OK, now the crunch: if you buy the 4-port adapter on the back of the MSA500 G2 and try to have controller redundancy (multi-path) between each node in the cluster and the SAN, there is no x64 multi-path driver. There is one for the MSA1000, but I'm not inclined to use another product's drivers even if it is the big brother of the MSA500.

I've been on to HP and nobody will commit to a date when the drivers will be available.

What a pain.

Oh, it's buried in the documentation as well; I diagnosed the problem because you see two disks in Disk Administrator for each logical partition - very strange, but understandable.

Tony

As I keep forgetting where these hot-fixes are held, I'm putting the link here for reference.

http://www.microsoft.com/downloads/results.aspx?pocId=&freetext=Cumulative%20Hotfix%20SQL%20Server%202000&DisplayLang=en

:)

Probably the first time you meet this thug is through this error message:

Msg 468, Level 16, State 9, Line 1

Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation.

It's at this point you scratch your head and think: what on earth? You may well have just installed a new copy of SQL Server as part of a server upgrade, or more probably you've restored a production database into your development SQL Server. (Quick tip: always check the collations of the master databases on the two installs to make sure they are the same; that way you won't end up installing the wrong collation - and note SQL_Latin1_ is different from Latin1_. Check the collation using the DATABASEPROPERTYEX Collation property; an example is shown below.)

Let's get some stuff out of the way first: collations are like code pages within SQL Server; they allow you to hold data in different languages and control how it sorts and compares.
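If you're not sure what collations your instance supports, or what its default is, the standard metadata functions will tell you; a quick sketch:

```sql
-- List the collations available on the instance, name and description
SELECT name, description
FROM sys.fn_helpcollations()
WHERE name LIKE 'Latin1_General%'

-- The instance default collation (inherited by model, and therefore tempdb)
SELECT SERVERPROPERTY( N'Collation' ) AS server_collation
```

The server collation matters because tempdb picks it up via model, which is exactly where the # table trouble below comes from.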

If you got this error then open up a query window to your database and do this...

USE <yourdb>
GO

print 'My database [' + db_name() + '] collation is: ' + cast( DATABASEPROPERTYEX ( db_name(), N'Collation' ) as varchar(128) )

print 'My tempdb database collation is: ' + cast( DATABASEPROPERTYEX ( 'tempdb', N'Collation' ) as varchar(128) )

If you get the same collation returned from both then you are probably OK at the database level for temporary (# or ##) table usage; if they return different collations (and note these really are different collations - SQL_Latin1_ is not the same as Latin1_), like this for instance...

My database [Training_UKUGCopy] collation is: SQL_Latin1_General_CP1_CI_AS
My tempdb database collation is: Latin1_General_CI_AS

If that's different then you've immediately got a potential problem using temporary tables, as shown here:-

create table test_collation (
   title varchar(500) not null
)

insert test_collation ( title )
   values( 'trevor dwyer' )

create table #test (
   title varchar(500) not null
)

insert #test ( title )
   values( 'trevor dwyer' )

select *
from test_collation t
   inner join #test tt on tt.title = t.title

Run this and you'll get this error:-

Msg 468, Level 16, State 9, Line 1

Cannot resolve the collation conflict between "SQL_Latin1_General_CP1_CI_AS" and "Latin1_General_CI_AS" in the equal to operation.

Why oh why? It's because of the CREATE TABLE #: the # table lives in tempdb, so when you create it the column 'title' gets tempdb's default collation. Your two databases are in different collations, so when you try to join the two tables together the collation conflict gives you the error.

We can resolve this by fixing the collation on the # table...

create table #test (
   title varchar(500) COLLATE SQL_Latin1_General_CP1_CI_AS not null
)
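An alternative that avoids hard-coding a collation name is COLLATE DATABASE_DEFAULT, which stamps the column with the current database's collation whatever that happens to be - handy for scripts that have to run on installs with different collations. A sketch:

```sql
-- DATABASE_DEFAULT picks up the collation of the current database
-- rather than hard-coding one, so the same script works anywhere
create table #test (
   title varchar(500) COLLATE DATABASE_DEFAULT not null
)
```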

This is not just limited to tempdb; if you are joining between different databases, for example Customers and Sales, and they have different collations, then you'll get the same problem. It's also not limited to the server: if you are using linked servers then collation can be a problem too.

We've seen how to 'fix' the temporary table; another way is to 'fix' the join. Because we know the collation of our local table, we can cast the collation of the remote column, for instance:-

select top 10 *
from torversrv03.torver_ukug.dbo.registrations r
   inner join #test t on t.title = r.fullname COLLATE SQL_Latin1_General_CP1_CI_AI

You need to be careful which side of the join you put the COLLATE on, otherwise you will hurt performance: you can turn what might be a nice index seek into some form of scan or hash join...
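To make that concrete, here's a hypothetical pair of joins using the tables from earlier, assuming an index exists on test_collation.title: casting the indexed column defeats the index, casting the other side leaves the seek available.

```sql
-- Casting the indexed local column means the predicate is no longer
-- sargable on that index - expect a scan:
select t.title
from test_collation t
   inner join #test tt on t.title COLLATE Latin1_General_CI_AS = tt.title

-- Casting the non-indexed side keeps an index seek on t.title possible:
select t.title
from test_collation t
   inner join #test tt on t.title = tt.title COLLATE SQL_Latin1_General_CP1_CI_AS
```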

Collations extend right the way down from server to value (Server, Database, Column, and the individual value when using sql_variant). Note the collation is not assigned to the sql_variant data type but to the actual value held within it...

declare @one sql_variant
set @one = 'tony rogerson' COLLATE SQL_Latin1_General_CP1_CI_AI

declare @two sql_variant
set @two = 'tony rogerson' COLLATE Latin1_General_CI_AI

if @one = @two
   print 'match'
else
   print 'no match'
go

declare @one sql_variant
set @one = 'tony rogerson' COLLATE SQL_Latin1_General_CP1_CI_AI

declare @two sql_variant
set @two = 'tony rogerson' COLLATE SQL_Latin1_General_CP1_CI_AI

if @one = @two
   print 'match'
else
   print 'no match'
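If you want to see which collation a particular sql_variant value is carrying, SQL_VARIANT_PROPERTY exposes it:

```sql
declare @v sql_variant
set @v = 'tony rogerson' COLLATE Latin1_General_CI_AI

-- The collation travels with the value, not with the sql_variant type
select SQL_VARIANT_PROPERTY( @v, 'Collation' ) AS value_collation
```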

I think that will do as an introduction to collations and will hopefully cover most problems you hit; there are a ton of other considerations, mainly extending outside SQL Server, which I'm not inclined to go into in this entry.

There are many occasions where you may want to use multiple filegroups, especially if you are dealing with large tables and fragmentation is a worry.

The script below shows a complete end-to-end example of how to create a file group and mark it as the default so all newly created objects are placed on there.

Note, there is nothing you can do about [PRIMARY]: if your creation scripts or drop / recreate index scripts specify ON [PRIMARY] then that data will always go on the [PRIMARY] filegroup.

-- Create the database
CREATE DATABASE test_filegroup
go

USE test_filegroup
go

-- Create a new file group
ALTER DATABASE test_filegroup
   ADD FILEGROUP NonClustIndexes
go

-- Add a file to the file group, we can now use the file group to store data

ALTER DATABASE test_filegroup
   ADD FILE (
      NAME = NonClustIndexes,
      FILENAME = 'E:\MSSQL\DEV\NonClustIndexes.ndf',
      SIZE = 5MB,
      MAXSIZE = 100MB,
      FILEGROWTH = 5MB
      )
   TO FILEGROUP NonClustIndexes
go

-- Change the default filegroup for where new tables and indexes are created.
ALTER DATABASE test_filegroup
   MODIFY FILEGROUP NonClustIndexes DEFAULT;
GO

-- Test this
CREATE TABLE xyz (
   mydata int not null constraint pk_xyz primary key clustered
   ) ON [PRIMARY]

INSERT xyz values( 1234 )
GO

-- This shows that the table was created on the [PRIMARY] file group and not our new user defined group
sp_help xyz
go

-- This does not move the table
DBCC DBREINDEX( xyz )
go

-- Nor does this
CREATE UNIQUE CLUSTERED INDEX pk_xyz ON xyz( mydata ) WITH DROP_EXISTING
GO

-- This does though
CREATE UNIQUE CLUSTERED INDEX pk_xyz ON xyz( mydata ) WITH DROP_EXISTING ON NonClustIndexes
GO

-- Table is now located on the NonClustIndexes filegroup
sp_help xyz
GO

-- Still on the NonClustIndexes filegroup
DBCC DBREINDEX( xyz )
GO

-- And still there.
CREATE UNIQUE CLUSTERED INDEX pk_xyz ON xyz( mydata ) WITH DROP_EXISTING
GO

-- What about new objects?
CREATE TABLE z (
   mydata int not null constraint pk_z primary key clustered
   )
GO

-- Correctly placed on our NonClustIndexes filegroup
sp_help z
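If you prefer the catalog views to sp_help, a sketch like this uses SQL Server 2005's sys.indexes and sys.data_spaces to show where each table and index actually lives:

```sql
-- Show the filegroup holding each index (index_id 0/1 is the table itself)
SELECT OBJECT_NAME( i.object_id ) AS table_name,
       i.name AS index_name,
       ds.name AS filegroup_name
FROM sys.indexes i
   INNER JOIN sys.data_spaces ds ON ds.data_space_id = i.data_space_id
WHERE i.object_id IN ( OBJECT_ID( 'xyz' ), OBJECT_ID( 'z' ) )
```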


The UK SQL Server User Group, in combination with SQLskills.COM, is pleased to bring you what we think is a really fantastic and unique SQL Server event with one of the world's leading SQL Server gurus - Bob Beauchemin.

Join us in Reading for this full day seminar

There are two ways to manage the growth requirements of database systems: scale-up and scale-out. Scale-up comes from improved hardware (more CPUs, faster CPUs) and from extensive tuning by application developers and database administrators.

Scale-out requires distributing and partitioning data as well as distributing the data processing. This involves characterizing the different types of data in your application as all data is not the same with respect to the database. In addition, scaling and failover capabilities can be greatly improved by dividing the process implementation into composable services that can execute asynchronously or in parallel.

We'll discuss designing an application architecture that implements a data management methodology that fully reviews the purpose and usage of your data leading to effective scale-out designs. We will provide you with a comprehensive understanding of how to make your application scale by leveraging the new capabilities of SQL Server 2005 that are geared around a Service-Oriented Database Architecture. The revolutionary enhancements in the new SQL Server release enable new possibilities for scale-out using a reliable event-driven active database model that replaces the session-based connected database communication model that exists today. This leads to new design patterns that deliver increased throughput, leverage commodity hardware, and increase availability – empowering you to deliver applications that meet ever-increasing up-time and transactional demands.

For more information and full agenda please see http://sqlserverfaq.com/train/bb200608

The UK SQL Server User Group, in combination with SQLskills.COM, is pleased to bring you what we think is a really fantastic and unique SQL Server event with one of the world's leading SQL Server gurus - Kimberly Tripp.

Join us for a three-day event which will immerse you in the world of performance tuning and optimization for SQL Server. Over the three days most labs and demos will target SQL Server 2005; however, many techniques work across both SQL Server 2000 and SQL Server 2005, and when something is version specific its limitations, restrictions and requirements will be discussed. More importantly, our classroom environment will include both versions.

This course started from a series of popular conference sessions and seminars – including sessions that have been top rated at conferences around the world – and now includes hands-on labs and significantly more time to really dive into the content. Together with these hands-on labs, the materials combine to create a cohesive, interesting and in-depth course.

Fast-paced and packed full of information, this is the way to understand better design practices that can improve resource utilization, scalability and overall system performance. If you want to improve performance and get a better understanding of SQL Server internals...this is the place to be!

Improve your performance today and acquire knowledge and skills that will continue to help you tomorrow!

Key features discussed: vertical and horizontal partitioning, table structures, index internals, statistics, how the optimizer chooses indexes, strategies for index coverage, procedure plans and plan caching, caching and recompilation, techniques to improve cache utilization, methodologies for finding performance problems.

For a full agenda and more information: http://sqlserverfaq.com/train/kt200608

This follows on from a problem reported against my Camel Case routine entry, and also raised by Adam Machanic in the private MVP groups.

SELECT *
FROM (
   SELECT x AS [data()]
   FROM (
      SELECT 'something'
      UNION ALL
      SELECT 'something else'
      UNION ALL
      SELECT 'something & something'
   ) y (x)
   FOR XML PATH('')
) z (final)

The problem with this is that it encodes the & as &amp;, which is no good to us.

A way round this problem is to use XQuery to get the value from the XML type and convert it into varchar(max).

select (
   SELECT mydata
   FROM (
      SELECT x AS [data()]
      FROM (
         SELECT 'something'
         UNION ALL
         SELECT 'something else'
         UNION ALL
         SELECT 'something & something'
      ) AS y (x)
      FOR XML PATH(''), TYPE
   ) AS d ( mydata ) FOR XML RAW, TYPE ).value( '/row[1]/mydata[1]', 'varchar(max)' )
      AS concat

And my 'fixed' camel case routine is as follows :-

set nocount on

declare @seq table (
   seq int not null primary key
)

declare @i int
set @i = 1

while @i <= 50
begin
   insert @seq values( @i )
   set @i = @i + 1
end

declare @names table (
   word varchar(50) not null
)

declare @breaks table (
   break_on_character char(1) not null
)

declare @exclude table (
   subtext varchar(20) not null
)

insert @names ( word ) values( 'colin leversuch-roberts being a pain lol' )
insert @names ( word ) values( 'jim von trapp' )

insert @breaks ( break_on_character ) values( ' ' )
insert @breaks ( break_on_character ) values( '-' )

insert @exclude ( subtext ) values( ' von' )

select CamelCase =
   (
      select Camel
      from ( select case when seq = 1
                           or ( substring( n.word, seq-1, 1 ) IN ( select break_on_character from @breaks )
                                and not exists ( select *
                                                 from @exclude e
                                                 where subtext = substring( n.word, seq-1, len( subtext ) )
                                               )
                              )
                         then upper( substring( n.word, seq, 1 ) )
                         else lower( substring( n.word, seq, 1 ) )
                    end as [text()]
             from @seq
             where seq <= len( n.word )
             order by seq
             for xml path( '' ), type ) AS c ( Camel )
         for xml raw, type ).value( '/row[1]/Camel[1]', 'varchar(max)' )
from @names n