SQL Server Consolidation and Configuration Part 1: CPU

So I only seem to get inspired when I'm totally immersed in a project. Having been busy for a while with a few projects requiring me to either consolidate or upgrade servers to SQL Server 2008 and 2008 R2, I finally decided to put my resistance aside and blog. This particular post is devoted to CPU configuration.

CPU affinity

One of the first things I like to do when I begin to consolidate a server is to calculate how many CPUs it needs allocated to it. Often I create a spreadsheet similar to the following to make this more visible:

              NUMA Node 0      NUMA Node 1      NUMA Node 2
              CPU 0  CPU 1     CPU 2  CPU 3     CPU 4  CPU 5
Instance 1      Y      Y
Instance 2                       Y      Y
Instance 3                                        Y      Y
Instance 4                       Y      Y         Y      Y

On hardware with 16 or more processors, my preference is to reserve CPU 0 for exclusive use by the operating system. Whenever I set CPU affinity for a server, I also set I/O affinity to the same CPUs. I have not yet found a reason to change this.
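As a sketch of how such a binding can be applied (the mask value below is an assumption matching the table above, where Instance 2 owns CPUs 2 and 3), the affinity can be set through sp_configure. Bits 2 and 3 of the bitmask give binary 1100, decimal 12:

```sql
-- Illustrative sketch: bind this instance to CPUs 2 and 3 only.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'affinity mask', 12;       -- schedulers on CPUs 2-3
EXEC sp_configure 'affinity I/O mask', 12;   -- I/O threads on the same CPUs
RECONFIGURE;                                 -- the I/O mask change only takes
                                             -- effect after an instance restart
```

On 2008 R2 the newer ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 2 TO 3 syntax is an alternative for the scheduler binding, though it does not cover the I/O mask.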

Now I know some of you may find CPU binding a bit stingy, but there are a few reasons why I do this:

Less is more

I like to set CPU and I/O affinity on consolidated servers and servers with many CPUs (12-16 or more). My guiding rule here is that sometimes less is more. We often assume that the more resources we give something, the faster it will run. Sometimes that just isn't so.

I have seen batch jobs complete 4 times faster on old servers with 4 CPUs than on modern servers with 16. Why? NUMA and multiplexing. There is an overhead when accessing memory from foreign NUMA nodes (discussed in more detail below). Modern processors work most effectively when they are allocated a similar kind of workload, one where they can easily utilise and reuse the Level 1, 2 and 3 caches. Each time a new task is allocated to a processor, it has to reload its cache. It can also be the case (and often is) that the processor may not be available to give its full attention to the process at hand.

The more you get the more you want

I have often seen with consolidation projects that developers quickly forget about being efficient with code. With the advent of new hardware, all the abysmal code that runs like a dog gets forgotten. For a while the problems go away, but instead of being contained on its own server, the problem has infected other databases with its contention for resources. That's why I like CPU affinity: it lets me easily silo off performance issues and again force developers to tune application issues as they arise.

Level 2 & 3 Cache

Careful examination of TPC benchmarks will give you a real insight into the effects of Level 2 and 3 cache. In essence, this cache gives the processor ultra-fast memory access for the temporary storage of calculations and lookup data. A larger cache here will often give better results for SQL Server processing than faster processors. Modern HP blade servers have even sacrificed Level 2 cache for the sake of much larger Level 3 caches. I believe this is a worthwhile sacrifice and worth considering when looking for the ultimate bang for your buck.

Non Uniform Memory Access (NUMA)

NUMA is now enabled on most modern hardware. The subject of NUMA is quite large, but in short, NUMA governs the way in which memory banks are shared between groups of cores. Memory access between NUMA nodes is quite slow, so there is a big advantage in having SQL Server processes access memory on their local NUMA node. Using CPU affinity to bind SQL Server instances to a specific NUMA node (or group of nodes) increases the likelihood that memory access will be local, thereby increasing performance.

When configuring CPU affinity you should always keep in mind which NUMA nodes relate to which CPUs.
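One way to see this mapping on a given server is to query the scheduler DMV (a sketch using sys.dm_os_schedulers, available since SQL Server 2005; parent_node_id is the NUMA node the scheduler belongs to):

```sql
-- Map visible online schedulers (and their CPUs) to NUMA nodes.
SELECT parent_node_id AS numa_node,
       scheduler_id,
       cpu_id,
       status
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE'
ORDER BY parent_node_id, cpu_id;
```

Cross-referencing this output against your affinity spreadsheet makes it easy to confirm that each instance's CPUs sit within a single node.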


Hyperthreading

One of my clients recently asked me whether it was good to use hyperthreading with SQL Server 2008 R2, and while my initial answer was no, I decided to do a little research. Some people I spoke to mentioned it would not be an issue with modern chipsets, and others veered away from the subject. As usual with SQL Server, the answer came back: it depends. Ultimately, without any compelling reason to use hyperthreading, my advice to anyone would be to avoid it.

UPDATE

See this blog post about my latest opinion on this issue:

@blakmk

Published 29 July 2010 14:00 by blakmk
