SQL and the like

Dave Ballantyne's blog. Freelance SQL Server database designer and developer at Clear Sky SQL

July 2010 - Posts

Count(*) or Count(1) ?

Every so often on the online forums, the question comes up: which is better, Count(*) or Count(1)?  So can it categorically be proved that one is better than the other?

What we need to do is populate a table, run both count(*) and count(1) against it, and see if we get any performance difference between the two.

Code Snippet

drop table largeTable
go
create table largeTable
(
  id integer not null
)
go
declare @v integer
insert into largeTable (id)
Select top 1000000 ROW_NUMBER() over(order by @v) -- ordering by a variable gives ROW_NUMBER an ORDER BY without forcing a real sort order
  from sys.columns a cross join sys.columns b cross join sys.columns c
go

select COUNT(*) from largeTable
go 100 -- "go 100" makes SSMS run the preceding batch 100 times
select COUNT(1) from largeTable
go 100

After executing the above code, count(*) completes in an average of 107 milliseconds and count(1) in an average of 108 milliseconds.  So, there is no meaningful difference in performance.
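If you want to reproduce those averages rather than eyeball the client timings, one option is to read them back from the plan cache. This is only a sketch, and it assumes the two counts have just been run on an otherwise quiet instance so the cached entries are easy to pick out:

Code Snippet

select substring(st.text, 1, 60)                         as query_text,
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count / 1000 as avg_elapsed_ms -- total_elapsed_time is in microseconds
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
where st.text like '%largeTable%'
  and st.text not like '%dm_exec_query_stats%' -- exclude this diagnostic query itself
order by avg_elapsed_ms desc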

What if, though, we use both count(*) and count(1) in the same statement?  The average execution time should be in the order of 200 milliseconds, right?

Code Snippet

select COUNT(*),COUNT(1) from largeTable
go 100

No, we still get the same average execution time, 100ms (ish).  Why is that?  Two operations at 100ms each should add up to 200ms.  The answer to this, as with a lot of answers, lies within the execution plan.
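If you want to look at the same plan yourself, one simple way (besides switching on "Include Actual Execution Plan" in Management Studio) is to ask for the plan XML alongside the results:

Code Snippet

set statistics xml on
go
select COUNT(*),COUNT(1) from largeTable
go
set statistics xml off
go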

[Execution plan for the combined COUNT(*), COUNT(1) query]

Let's look at the properties of the highlighted Compute Scalar operation.

[Properties of the Compute Scalar operator]

Both of the output columns are derived from the SAME expression, Expr1006, which is the result of the Stream Aggregate.  Looking at the properties of that operator, we see

[Properties of the Stream Aggregate operator]

No mention at all of count(1) anywhere.  Indeed, if we look at the query plan of

Code Snippet

select COUNT(1) from largeTable

we still see the count(*) scalar operator used. 

So not only have we now shown that there is no performance difference between the two, but also that, to the engine, they are the same operation.

SQLBits 7

For those that may have missed it, voting for SQLBits sessions is now open.  Make your voice heard and get voting.  It is quite a challenge to pick a mere 10 sessions from the nearly 100 sessions listed, but it will give the committee a really good idea of what YOU want.

Additionally, the pricing has been announced, and that has got to be good value considering that pre-cons are coming from the likes of Buck Woody (blog | twitter), Brent Ozar (blog | twitter) and Rob Farley (blog | twitter).

Using a UDF as a SARG. Make a hash of it!

I am somewhat uncomfortable blogging this, as I feel that it falls into the “polish a turd” category.  Having said that, it is a particularly ugly turd and it does polish up rather well.  Though, as the saying goes, it is still a turd.

   

The evils and inadequacies of UDFs (User Defined Functions) in SQL Server are well documented, but there is another scenario using UDFs that will cause a lot of pain if not carefully controlled.

A performance question on SQL Server Central about joining on UDFs got me thinking about how SQL Server joins sets of data when the SARGs use UDFs.  This led to two blog posts by Conor Cunningham (1, 2) and I would urge you to read those.

In short, there are three methods to join data: loop, hash and merge.  I'm going to ignore merge for the rest of this post and concentrate on loop and hash.
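Just to show the syntax, a query-level hint will steer the optimizer towards a particular physical join; this is purely an illustrative sketch against a system view, not a recommendation to hint in everyday code:

Code Snippet

select * from sys.objects a join sys.objects b on a.object_id = b.object_id option (loop join)
select * from sys.objects a join sys.objects b on a.object_id = b.object_id option (hash join)
select * from sys.objects a join sys.objects b on a.object_id = b.object_id option (merge join)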

Consider the following SQL Code :

Code Snippet

Drop function Func1
go
create function Func1(@id integer)
     returns varchar(510)
as
begin
   return cast(@id as varchar(510))--Func1
end
go
Drop function Func2
go
create function Func2(@id integer)
     returns varchar(510)
as
begin
   return cast(@id as varchar(510))--Func2
end
go
drop table #x1
go
Create table #x1
(
  Id integer
)
go
insert into #x1(Id)
select top(10) ROW_NUMBER() over (order by (select null))
  from sys.columns
go
select * from #x1 a join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
option(recompile)
How much thought would be given to the mechanics of how the data is joined together?  My guess is not a lot, and rightly so. In most scenarios we trust the optimizer to do its job, and generally speaking it does it well.  The major problem with UDFs is that the optimizer is unaware of how much work is involved inside one.  All it knows is that the function returns a single value of a known datatype.  The effort of calculating that value, be it string parsing, mathematical equations or more SQL code, is not considered.  This can have dramatic consequences if a loop join is chosen to process the query.


In the above select statement, how many executions of the functions “Func1” and “Func2” would you expect?

Here’s the execution plan.

[Execution plan: nested loops join between #x1 and #x1]

No real clue there, but if you run Profiler with a stored procedure statement trace you will find that both functions were executed 100 times!  Because we have 10 rows in our source table (#x1), it's very easy to deduce why: for each possible combination of rows, both functions have been executed (10 * 10 = 100).  You can do the math for what would happen with 1,000 rows on each side.
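As an aside, on SQL Server 2016 and later the sys.dm_exec_function_stats DMV offers a lighter-weight way to confirm execution counts than a trace; a sketch:

Code Snippet

select object_name(object_id) as function_name,
       execution_count
from sys.dm_exec_function_stats
where object_name(object_id) in ('Func1', 'Func2')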

So what's the solution? How can we lower the number of calls to our functions?  Adding 'with schemabinding' produces a slightly different plan, but doesn't lower the overall number of calls to the functions.
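For completeness, schemabinding is just one extra clause on the function definition; altering Func1 would look like this (Func2 would be the same):

Code Snippet

alter function Func1(@id integer)
     returns varchar(510)
with schemabinding
as
begin
   return cast(@id as varchar(510))--Func1
end
go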

What we must do is force a hash join.

Code Snippet

select * from #x1 a inner hash join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
option(recompile)

That results in the below execution plan.

[Execution plan: hash match join between #x1 and #x1]

Now, if we look at our Profiler trace, we will see that each function has been called only 10 times.

Turd Polished.