SQL and the like

Dave Ballantyne's blog. Freelance SQL Server database designer and developer at Clear Sky SQL

Using a UDF as a SARG. Make a hash of it!

I am somewhat uncomfortable blogging this, as I feel that it falls into the “polish a turd” category.  Having said that, it is a particularly ugly turd, and it does polish up rather well.  Though, as the saying goes, it is still a turd.


The evils and inadequacies of UDFs (user-defined functions) in SQL Server are well documented, but there is another scenario involving UDFs that will cause a lot of pain if not carefully controlled.

A performance question on SQL Server Central about joining on UDFs got me thinking about how SQL Server joins sets of data when the SARGs are UDFs.  This led to two blog posts by Conor Cunningham (1, 2), and I would urge you to read those.

In short, there are three methods to join data: loop, hash and merge.  I’m going to ignore merge for the rest of this post and concentrate on loop and hash.

Consider the following SQL code:

Code Snippet

drop function Func1
go
create function Func1(@id integer)
     returns varchar(510)
as
begin
   return cast(@id as varchar(510))--Func1
end
go
drop function Func2
go
create function Func2(@id integer)
     returns varchar(510)
as
begin
   return cast(@id as varchar(510))--Func2
end
go
drop table #x1
go
create table #x1
(
  Id integer
)
go
insert into #x1(Id)
select top(10) ROW_NUMBER() over (order by (select null))
  from sys.columns
go
select * from #x1 a join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)
option(recompile)

How much thought would be given to the mechanics of how the data is joined together?  My guess is not a lot, and rightly so: in most scenarios we trust the optimizer to do its job, and generally speaking it does it well.  The major problem with UDFs is that the optimizer is unaware of how much work goes on inside them.  All it knows is that the function returns one value and that value’s datatype.  The effort of calculating it, be that string parsing, mathematical equations or more SQL code, is not considered.  This can have dramatic consequences if a loop join is chosen to process the query.

In the above select statement, how many executions of the functions “Func1” and “Func2” would you expect?

Here’s the execution plan.


No real clue there, but if you run Profiler with a stored procedure statement trace you will find that both functions were executed 100 times!  Because we have 10 rows in our source table (#x1), it is easy to see where 100 comes from: the functions are both executed for every possible combination of rows (10 * 10 = 100).  You can do the math for 1,000 rows on each side.
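As an aside, on SQL Server 2016 and later you can confirm the call count without a Profiler trace by querying sys.dm_exec_function_stats.  This is a sketch, not part of the original experiment, and it assumes nothing else on the instance is calling these functions:

    -- Run immediately after the join query; the counters reset when
    -- the function's plan is evicted from the cache.
    select object_name(object_id) as function_name,
           execution_count
    from sys.dm_exec_function_stats
    where object_name(object_id) in ('Func1', 'Func2');

With the 10-row #x1 and a loop join, you would expect this to report 100 executions for each function.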

So what’s the solution?  How can we lower the number of calls to our functions?  Adding ‘WITH SCHEMABINDING’ causes a slightly different plan, but doesn’t lower the overall number of calls to the functions.
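For reference, the schemabinding variant is the same function body with the clause added; as noted above, it changes the plan slightly but not the call count:

    create function Func1(@id integer)
         returns varchar(510)
    with schemabinding
    as
    begin
       return cast(@id as varchar(510))--Func1
    end

Schemabinding lets the optimizer treat the function as schema-bound (and, if applicable, deterministic), which can remove some spool operators from the plan, but it does nothing about the per-row cost of the call itself.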

What we must do is force a hash join.

select * from #x1 a inner hash join #x1 b on dbo.Func1(a.id) = dbo.Func2(b.id)


That results in the below execution plan.


Now, if we look at our profiler trace we will see each function has only been called 10 times. 

Turd Polished.


Piotr Rodak said:

That's very interesting, Dave, and a clever resolution.

I knew about the nasty surprises scalar UDFs can cause, and this is another case.


# July 7, 2010 9:10 PM

Paul White NZ said:

You can't polish a turd, but you can roll it in glitter!

# July 27, 2010 2:12 PM

Paul White NZ said:


On further reflection, I think the optimiser does a decent job here.  It may choose loops join for the 10 rows in your sample, but (at least on my 2008 system) it transitions to a hash join at 68 rows, and to a merge join at 122 rows.


# August 8, 2010 6:58 PM

Dave Ballantyne said:


Totally agree.  The turd here is not the optimizer, but joining on UDFs.  The optimizer has no idea, and cannot have an idea, of the costs involved in calling and executing the UDF.  By forcing a hash, all we are doing is softening the blow a little.  The number of rows per se is irrelevant; it is more to do with the statistical relationships between the tables and the percentage of data that is to be joined.

You may like to play with the length of the varchar returned; that will also change the point at which the optimizer switches between join strategies.  The overriding factor, to paraphrase Conor Cunningham, is the cost of generating the required hash buckets against the cost of lookups for a loop.  Presumably it also guesstimates the cost of the sort(s) required for merge joins and brings that into play too.

I don't feel as if I'm telling you anything you don't know already though :)

# August 9, 2010 8:42 AM