October 2007 - Posts

Do you use the float datatype?

I thought I’d perhaps blog about this, although I don’t think it’s of any real significance unless you like to use the float datatype, which I hope you don’t.


I had restored a production backup onto a development box, and the guys had run a couple of checks and found that a sum over the data returned different values from production. As this is a financial system, the fact that there is a difference is important.


My initial thought was that it was something about the float datatype, but I couldn’t remember exactly what; my vague memory was that the floating point precision of the CPU affected the accuracy. I should point out that I don’t recommend using the float datatype and always use numeric; however, this is a client database and an answer was required.

I posted the question and eventually received confirmation:



A float is an approximate data type. It will not produce the same results from one processor architecture to another and won't even necessarily do so from one machine to another. In addition, if you are using a float in computations, then you will be compounding rounding factors, which magnifies the effect. If you need the calculations to a high precision as well as requiring them to be exactly the same regardless of machine or processor architecture, then you need to replace the float data type with a decimal data type.

Michael Hotek, Vice-President, MHS Enterprises, Inc.; President, FilAm Software Technology, Inc.
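The compounding effect Michael describes is easy to demonstrate. The sketch below (my own illustration, not from the original discussion) accumulates 0.1 ten times into a float variable and into a decimal variable; 0.1 has no exact binary representation, so each float addition carries a tiny error that compounds, while the decimal stays exact.

```sql
-- Minimal sketch: sum 0.1 ten times in float and in decimal.
DECLARE @f float, @d decimal(10,2), @i int
SET @f = 0.0
SET @d = 0.0
SET @i = 0

WHILE @i < 10
BEGIN
    SET @f = @f + 0.1   -- 0.1 is not exactly representable in binary float
    SET @d = @d + 0.1   -- decimal arithmetic is exact at this scale
    SET @i = @i + 1
END

-- decimal_error is exactly 0; float_error is typically a tiny non-zero value,
-- and as the quote above notes, exactly how tiny can vary with the hardware.
SELECT @f - 1.0 AS float_error, @d - 1.0 AS decimal_error
```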


Shawn also pointed me to Ken Henderson’s blog; thanks for that.




However, I took Ken’s sample code and ran it across a few servers. On the SQL 2000 servers I got the same results as Ken, but I could not reproduce them on SQL 2005, either 32 or 64 bit, so maybe float isn’t so vague in SQL 2005.


Posted by GrumpyOldDBA with 20 comment(s)

How to monitor SAN performance

I was at the Storage Expo last week, having a look around for SAN monitoring tools and speaking to a couple of vendors about them. Yes, I know you'll probably think "how sad" <grin>.
Storage performance can impact SQL Server database performance, and my experience has been that the database engine is usually seen to be at fault rather than the nice shiny and expensive SAN.
I have the first two levels of SNIA certification in Fibre Channel Architecture, SCSP and SCSE (see http://www.snia.org/education/certification for further information and http://www.infinityio.co.uk for a UK training provider). I followed this path because I felt I needed to understand more about what goes on behind the scenes. First off, it's important to understand that a SAN is not the piece of tin that holds all the disks; that is just storage. The SAN is the whole network from servers to storage and more, and the key word to remember is Network (the N in SAN).
So, enough waffle. Performance monitoring a SAN is difficult: vendor tools tend to be expensive, vendor consultants (in my experience) don't always tell the truth, and it's very difficult to convince anybody that their expensive investment actually performs worse than a workstation.
I was pleased to have a long talk with Laky Hothi from GCH Test and Computer Services Ltd (www.gch-services.com), who were showing a vendor-independent performance dashboard; vendor independence is a key requirement that SNIA promotes. No doubt the solution is not cheap, but a SAN is a major investment for a company, and should it not perform it may put the business at risk.
You can check out NetWisdom at http://www.gch-services.com/san_netwisdom_express.htm, and open an information sheet, which includes a screenshot, at http://www.gch-services.com/Files/Finisar/NetWisdom/8VDtTVLR_NetwisdomExpress[1].pdf.
I don't believe there's anyone else in the UK offering this product/service, and although a large part of the metrics/technology is out of scope for me as a DBA, I was impressed with the product and its capabilities.
I also had a chance to talk to Hitachi Data Systems; they have some very sharp people who understand database performance issues, and as they are keen, hopefully we might get Tony to bring them along to talk about their technology at a user group meeting. I've spoken to HDS previously and always found them both helpful and approachable, which is more than I can say for some other vendors. They had some excellent whitepapers when I was involved in some storage project work; a quick search turned up a couple of SQL Server references: http://search.hds.com/cgi-bin/ss_query?keys=whitepapers%2Bsql+server&sitenbr=157837584&lng=&rgn=&e=&foo.x=24&foo.y=10


It seems my comments in my SQLBits presentation about dropping sys.dm_db_index_physical_stats into a table valued function so it can be used within queries caught the eye of an important person. Now, I'd not really given much thought to what I did: I was working on the index analysis and tried to join the view, which isn't a view after all, to a table variable. I got an error message, thought "oh dear!!" and said to myself, "well, it's not a tvf, so let's try to make it one".

When I was writing part 2 of my analysing indexes series, I added this into my documentation, which is how it made it into the SQLBits presentation. Until I was emailed, I'd not given any thought to whether what I had done was wrong. If you're now totally lost and wondering what I'm talking about: SQL 2005 introduced the APPLY operator, http://technet.microsoft.com/en-us/library/ms175156.aspx, which allows you to join against table valued functions.

This is a subset of the code in post 2 of the index analysis; it shows how sys.dm_db_index_physical_stats gives an error.


CREATE TABLE dbo.Unused_Indexes
(
    [object_id] int NOT NULL,
    index_id int NOT NULL
)

insert into dbo.Unused_Indexes
select object_id, index_id from adventureworks.sys.indexes


Now run this code from the user database (adventureworks in this case); it assumes a DBA database called dbalocal.


declare @table table(object_id int, ind_name sysname, index_id int)

insert into @table
select top 25 ui.[object_id], si.name, ui.index_id
from dbalocal.dbo.Unused_Indexes ui
join sys.indexes si on ui.[object_id] = si.[object_id] and ui.index_id = si.index_id
where objectproperty(ui.[object_id], 'IsUserTable') = 1
group by ui.[object_id], ui.index_id, si.name
having count(*) = 1
order by count(*) desc, ui.[object_id], ui.index_id asc

select object_name(ui.[object_id]) as TableName,
       ui.ind_name as IndexName,
       max(fps.index_level) as IndexDepth,
       sum(fps.page_count) as TotalPages,
       max(fps.record_count) as LeafRows,
       sum(fps.page_count)/128 as Mb
from @table ui
CROSS APPLY master.sys.dm_db_index_physical_stats(DB_ID(), ui.[object_id], ui.index_id, NULL, 'Detailed') AS fps
group by ui.[object_id], ui.ind_name
order by sum(fps.page_count) desc;


This is the message that is returned:


Msg 413, Level 16, State 1, Line 12
Correlated parameters or sub-queries are not supported by the inline function "master.sys.dm_db_index_physical_stats".
Msg 413, Level 16, State 1, Line 12
Correlated parameters or sub-queries are not supported by the inline function "master.sys.dm_db_index_physical_stats".


The entire post, part 2 of my index analysis, is at http://sqlblogcasts.com/blogs/grumpyolddba/archive/2007/07/22/working-with-indexes-part-2.aspx


Simon Sabin blogged about this before me, but I wasn't aware of that at the time. The solution to the error is to place the inline function inside a user defined tvf; in fact, you could do the same with your own inline functions if for some reason you couldn't produce a tvf directly. Please be careful using sys.dm_db_index_physical_stats in this way on very large tables or across large databases: you may find yourself waiting a long time for a result set.
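As a sketch of that workaround (the function name and column list here are my own illustration, not taken from Simon's post): wrap the DMV call in a multi-statement table valued function that takes the correlated values as parameters, then CROSS APPLY the function instead of the DMV itself.

```sql
-- Illustrative wrapper; dbo.IndexPhysicalStats is a made-up name for this example.
-- Because the correlated values arrive as function parameters, the APPLY no
-- longer correlates directly into the inline system function, avoiding Msg 413.
CREATE FUNCTION dbo.IndexPhysicalStats
    (@object_id int, @index_id int)
RETURNS @stats TABLE
    (index_level tinyint, page_count bigint, record_count bigint)
AS
BEGIN
    INSERT INTO @stats (index_level, page_count, record_count)
    SELECT index_level, page_count, record_count
    FROM sys.dm_db_index_physical_stats
         (DB_ID(), @object_id, @index_id, NULL, 'Detailed')
    RETURN
END
```

The earlier query then uses CROSS APPLY dbo.IndexPhysicalStats(ui.[object_id], ui.index_id) AS fps in place of the direct DMV call. The same caution applies: the 'Detailed' mode reads every page of each index it is pointed at.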





Posted by GrumpyOldDBA with 3 comment(s)