July 2010 - Posts

Primary keys, unique indexes and unique constraints all enforce uniqueness. The difference between the primary key and the unique index/constraint is that the primary key does not allow any NULL values. Unique indexes and constraints do allow NULL values, but they only allow one of them. Well, that's obvious, isn't it? If you have two NULL values then you have duplicates, and that breaks the uniqueness.

Let's try it:


create table TableWithUniqueNullableColumn (PkID int identity(1,1), UniqueCol int)


alter table TableWithUniqueNullableColumn add constraint UQ_TableWithUniqueNullableColumn_UniqueCol unique (UniqueCol)


insert into TableWithUniqueNullableColumn (UniqueCol) values (null)

insert into TableWithUniqueNullableColumn (UniqueCol) values (null)

The second insert will fail :(

So how do you enforce uniqueness AND have multiple NULL values?

Well, on SQL 2008 it's easy: you just use a filtered index that excludes NULL values. Drop the unique constraint created above first, otherwise it will still block a second NULL.

alter table TableWithUniqueNullableColumn drop constraint UQ_TableWithUniqueNullableColumn_UniqueCol

create unique index UQ_TableWithUniqueNullableColumn_UniqueCol on TableWithUniqueNullableColumn (UniqueCol) where UniqueCol is not null

Now you should be able to insert multiple NULL values:

insert into TableWithUniqueNullableColumn (UniqueCol) values (null)

insert into TableWithUniqueNullableColumn (UniqueCol) values (null)

OK, that's fine, but what about SQL 2005?

Well, there is a trick: you can use a computed column that has the same value as your unique column, unless it is NULL, in which case it uses the PK value for your table. (If you've been running the scripts above, drop the earlier unique constraint or index first so the names don't clash.)

alter table TableWithUniqueNullableColumn add SurrogateUniqueCol as isnull(UniqueCol,-PkID)

alter table TableWithUniqueNullableColumn add constraint UQ_TableWithUniqueNullableColumn_UniqueCol unique (SurrogateUniqueCol)

You will see that I am using -PkID; this provides protection against overlapping values. It is up to you to make sure the values in your backup column (in this case PkID) don't overlap with the values in your unique column. You could add a prefix or use some other way of ensuring no duplicates.
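For example, a prefixed string surrogate might look like this (a hypothetical variant; the column name and 'PK:' prefix are just for illustration):

```sql
-- Build the surrogate as a string so NULLs map to a prefixed PK value
-- that can never collide with a real UniqueCol value
alter table TableWithUniqueNullableColumn
    add SurrogateUniqueColStr as isnull(cast(UniqueCol as varchar(20)),
                                        'PK:' + cast(PkID as varchar(20)))
```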

Now with that in place you can insert NULL values, and if you try to insert duplicate non-NULL values it will fail.

Just try this; the first three should work and the last one should fail.

insert into TableWithUniqueNullableColumn (UniqueCol) values (null)

insert into TableWithUniqueNullableColumn (UniqueCol) values (null)

insert into TableWithUniqueNullableColumn (UniqueCol) values (1)

insert into TableWithUniqueNullableColumn (UniqueCol) values (1)


[Update 30/7/2010] If you make the column persisted then you can reference the column with a foreign key.
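A minimal sketch of that update, on a fresh table (the table and column names here, including ChildTable, are hypothetical):

```sql
-- Persisted computed surrogate, so a foreign key can reference it
create table ParentWithUniqueNullableCol (
    PkID int identity(1,1) primary key,
    UniqueCol int,
    SurrogateUniqueCol as isnull(UniqueCol, -PkID) persisted,
    constraint UQ_Parent_SurrogateUniqueCol unique (SurrogateUniqueCol)
)

-- A hypothetical child table referencing the persisted computed column
create table ChildTable (
    ChildID int identity(1,1) primary key,
    ParentSurrogate int not null,
    constraint FK_Child_Parent foreign key (ParentSurrogate)
        references ParentWithUniqueNullableCol (SurrogateUniqueCol)
)
```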

Posted by simonsabin | 9 comment(s)

Today is the last day for voting for SQLBits sessions. Make sure you get your votes in, go to the Public Sessions page to vote.

Once the votes are in we will compile the agenda.

Posted by simonsabin | 2 comment(s)

Registration is now open for SQLBits 7.

If you want to attend Thursday or Friday AND Saturday, then click the option for the relevant day and you can select that you wish to attend the Saturday later on in the registration process.

With the Saturday being a free day, it's essential for us that people make the choice whether they are going or not.

Hope to see you there

Posted by simonsabin | 2 comment(s)

Microsoft are recruiting SQL Server support people. https://careers.microsoft.com/JobDetails.aspx?ss=&pg=0&so=&rw=1&jid=21950&jlang=EN

This job isn't about understanding TSQL, it's about understanding programs. SQLServer.exe is a big program, and you will need to be able to debug it to get to the bottom of your client support calls.

I've spoken to Bob Ward about the type of person that is needed, and it's not your average enthusiast: you have to understand code at a deep level, probably including networking, to debug those Kerberos and authentication issues.

If you think you are up to it, why not apply? I for one would love to get my hands on the sqlserver.exe code, but alas I'm not clever enough.

Posted by simonsabin | 4 comment(s)

I keep hearing little bits about what's coming in SQL 11 and I'm getting really excited. I often comment about how certain things don't get fixed, so when I see things do get fixed it makes me smile.

The latest is undo in SQL Server Integration Services. My Connect item hasn't just been closed, but closed as fixed.


They just need to get the projects working in VS 2010 and away we go.


Posted by simonsabin | 4 comment(s)

We keep getting told that to get things changed with SQL Server we need to vote on connect items.

I decided to look at the top voted items.


Beyond the service pack requests, 3 of the next 5 top requests are for:

IntelliSense backward compatibility

SSIS VS2010 project type


I would love to see these in the product; the first two would make productivity so much better. The second one just drives me nuts: having to have multiple versions of VS installed IMHO screws up the performance of your machine, especially on a 64-bit system.

Currently I have to have VS2005, VS2008 and VS2010 installed in order to support my clients; most of them are forward thinking, but almost all have SQL 2005 systems still in place.

If you feel any of these would help you then make sure you vote.

Oh, and I forgot the most important one, because this IMHO is a bug:

User defined function performance is unacceptable

Anyone using scalar user-defined functions is shooting themselves in the foot. Vote on this item if you would like scalar user-defined functions to perform as they should and not 10 times slower.
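To illustrate the cost, the usual workaround is to rewrite a scalar UDF as an inline table-valued function, which the optimiser can expand into the query plan (a hedged sketch: dbo.Orders, the function names and the 0.175 rate are all made up for the example):

```sql
-- Hypothetical scalar UDF: invoked once per row, which is what hurts
create function dbo.GetTax (@amount money)
returns money
as
begin
    return @amount * 0.175
end
go
-- select OrderID, dbo.GetTax(Amount) from dbo.Orders   -- slow: row-by-row

-- The same logic as an inline table-valued function
create function dbo.GetTaxInline (@amount money)
returns table
as
return (select @amount * 0.175 as Tax)
go
-- select o.OrderID, t.Tax
-- from dbo.Orders o
-- cross apply dbo.GetTaxInline(o.Amount) t             -- inlined into the plan
```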

Posted by simonsabin | 1 comment(s)

If you've been developing complex reports in Reporting Services then you will likely have resorted to custom code to solve issues like custom aggregations and custom colour palettes. Below are a set of suggestions that relate to developing custom code. All of these are a royal pain in the a***. I can understand that the code editor isn't a first-class editor, but I really don't know why there aren't enumerators for the fields, parameters and variables collections, or why you can't access the datasets programmatically.

If you think these items would be useful, please vote on them:


Being able to loop through the parameters


Having shared code so it's not embedded in reports



Can’t use imports statement with custom code


Intellisense, compilation etc.


Access to datasets from custom code


Report Builder 3.0 has some minor additions that make your life a little better.

One that I've just found is the ability to know which renderer is being used. Why is that useful? Well, in all the training I do on Reporting Services I always highlight the issues with Excel exporting.

The main issues are due to columns and cells being merged, which prevents users from sorting data once it's in Excel.

One of the solutions is to optionally display items so they don’t cause these problems. Historically you’ve had to provide a parameter that asks the user if they want an Excel friendly format.

Well, now you have a global variable, RenderFormat, that allows you to provide custom formatting based on the renderer that is being used.

To access the variable you can use the following.
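For example, an expression along these lines (a sketch: the Hidden property and the "EXCEL" renderer name are illustrative):

```
' Expression for e.g. a textbox's Hidden property:
' hide the item when the report is rendered to Excel
=IIF(Globals!RenderFormat.Name = "EXCEL", True, False)
```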


In Report Builder 2.0, if you change from a local to a shared data source (or the other way round) and click OK, the change isn't saved.

This is clearly a bug in the form's IsDirty programming, which detects whether something has changed that requires saving.

To get round this, change your data source, then change the name of the data source and click OK. The change of name forces the dialog to realise something needs saving, and it saves your change of data source as well. Ideally you then need to go back in and change the name back.

Just another Report Builder nuance

Posted by simonsabin | 2 comment(s)

What do you think of the new Xbox?

The winner of the Xbox at SQLBits 6 should be getting their new Xbox 360 soon, and I'm sure we will be giving out more at the next SQLBits in September.

I'm personally looking forward to Kinect. My son loves the Wii, so it will be interesting to see what happens with Kinect; being out at the start of November, I guess it's going to be a Christmas present.


Posted by simonsabin | 1 comment(s)