Thanks to Dave McMahon of the NxtGenUG for the topic suggestion - I'd recommend people check Dave's group out, it looks good!
Setting up SQL Server is a piece of cake: you download the software or put the DVD in, run Setup, and it installs everything for you, wizard fashion. OK, you can do that - but what do you need to consider before installing, and what afterwards?
For your developer machine you probably aren't that bothered, but if you are doing something for a client and you are in the hot seat for setting up SQL Server, then you should do the best job you can - if only to give your client a good experience of the brilliance of your solution ;).
How much memory
Size and speed
SCSI or SATA
NT Allocation Size
Placement of database files
Software and OS
Version of OS
Version of SQL Server
32 or 64 bit
Standalone SQL box or shared Application server
Database files - separate log and data
Set SQL Server max/min memory
Let's drill into some of these to give you some ideas.
When considering your hardware, sit down and work out how many concurrent users you expect - that is to say, connections doing work at a single point in time. It's always worth spending the extra couple of hundred £ on a dual core processor; hyper-threading, in my opinion, didn't really bring that much to the table with SQL Server, except confusion! Memory is so cheap it's ridiculous, so do a capacity planning exercise on your database and size the memory accordingly - remember to include applications and space for the OS. Given 1GB of memory, I'd personally always make sure that there is around 128MB available to the OS (measured through Task Manager). Remember SQL Server uses a cache for data and procedure plans, so it will eat memory (no, it's not a memory leak!). Give SQL Server a max or, preferably, a fixed memory size so it's not relinquishing memory to other applications, which would then cause you performance problems. Remember: the more you can do in memory, the quicker your application will be; if SQL Server has to keep going out to disk to service your query then performance will suffer.
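As a sketch of that last point: assuming a box with 4GB of RAM where you want to leave roughly 1GB for the OS and anything else running, you could pin SQL Server's memory like this (the figures are purely illustrative, not a recommendation - size from your own capacity planning):

```sql
-- Illustrative figures only: cap SQL Server at 3072MB on a 4GB box,
-- leaving ~1GB for the OS and other processes.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 3072;
EXEC sp_configure 'min server memory (MB)', 3072;  -- same value = fixed size, never relinquished
RECONFIGURE;
```

Setting min and max to the same value gives the fixed-size behaviour described above; check Task Manager afterwards to confirm the OS still has its headroom.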
After memory, disks have got to be the most important part of a good SQL install - make sure you invest in a good IO subsystem, otherwise you are doomed before you start. SATA is almost comparable to SCSI nowadays, as you can see by the massive drop in price of SCSI disks; where SATA was let down until a year or two ago was disk access times, but you can get 10Krpm disks now. Western Digital do the Raptor WD740G, a 74GB 10Krpm SATA disk with an average seek time of 4.5ms, which is respectable, and for just £102 compared with the SCSI equivalent, the Seagate Cheetah, at £125. If you have some spare cash, 15Krpm shaves another 1ms off the seek time - Fujitsu do one for £225 with a 3.3ms seek time. Just 1.2ms difference per seek can aggregate to quite a lot on systems that are disk bound. Personally, I take SATA for mid range systems and also for an online backup RAID 1 array, but use SCSI 15Krpm for anything that requires the juice.
RAID 5 is dead and buried; people shouldn't be using it any more - it's a cost-saving fault tolerance solution. Check out the site Battle Against Any RAID Five for more information; I must remember to bookmark that site - guessing at URLs always brings up something surprising, I typed barf.org first, which is something else entirely. Hot spares are a good idea if you don't mind spending a few hundred on a redundant disk: if any disk in your array should fail, the hot spare is brought into the array and fault tolerance is restored once the rebuild completes (all done while your system is online).
The biggest thing missed is the allocation size: if you do an install of Windows, check out the allocation size on the C: (system) drive - it's just 4KB! Keep your databases off the system drive; make your system drive a good few GB and put your databases on another logical volume, formatted with an allocation size that is balanced with your particular disk controller and RAID stripe. Personally I use 64KB - but this is a whole new topic that could fill pages....
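To make that concrete, on a hypothetical D: volume destined for database files you can check the current allocation unit size with fsutil, and reformat with a 64KB allocation size - destructive, so only on an empty volume, and 64KB is my preference rather than a universal rule:

```cmd
:: Check the current allocation unit size - look for "Bytes Per Cluster"
fsutil fsinfo ntfsinfo D:

:: Reformat with a 64KB allocation size - THIS WIPES THE VOLUME
format D: /FS:NTFS /A:64K /Q
```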
Placing database files accordingly can have a dramatic effect on performance. Consider a process that is write bound: one of my clients has devices storing event data in a SQL Server, so inserts are happening at a phenomenal rate of knots. There are two parts to a database - the data files and the log files. Writing to the transaction log is sequential in nature, so it's good if the disk head is close to or at the position it last wrote; if it isn't, it starts costing you disk seek time (remember those 4.5ms average seek times...). For big tables you can get good results splitting the clustered and nonclustered indexes onto separate filegroups on separate volumes (preferably physical) - it has a big impact on the checkpoint process, that's the process that runs every minute or so writing dirty pages off to disk (remember, when you do an insert/update/delete the data isn't written immediately - the pages are marked dirty in cache and written later; it's the log that is written immediately).
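A minimal sketch of that split, assuming a hypothetical Events database and an E: volume on a separate physical disk for the nonclustered indexes (names and paths are illustrative only):

```sql
-- Hypothetical database, table and paths, for illustration only.
ALTER DATABASE Events
    ADD FILEGROUP IndexFG;

ALTER DATABASE Events
    ADD FILE (NAME = Events_Indexes,
              FILENAME = 'E:\SQLData\Events_Indexes.ndf',
              SIZE = 2GB)
    TO FILEGROUP IndexFG;

-- The nonclustered index goes to the new filegroup; the clustered
-- index (i.e. the table data itself) stays on PRIMARY.
CREATE NONCLUSTERED INDEX IX_EventData_DeviceId
    ON dbo.EventData (DeviceId)
    ON IndexFG;
```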
I always check and mention controller cache to clients: make absolutely sure the controller cache is battery backed up (on the controller itself) - and no, having a UPS doesn't mean you'll be OK! It needs a battery on the controller, and that battery needs to be healthy. Get write caching wrong and you'll get a slow system; worse, you can corrupt your database.
Most new hardware bought now has a 64-bit chip in it; if you are buying new, make full use of that chip and run 64-bit software. You need to make sure that any printers or devices you have attached have a 64-bit driver, otherwise you'll have problems with those devices. The main benefit of 64-bit is memory: you don't need to mess about with AWE to get SQL Server to use more than 2GB of RAM.
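For completeness, on a 32-bit box the AWE dance looks like this (you also need the "Lock pages in memory" privilege for the service account, and depending on the configuration the /PAE boot switch) - another reason 64-bit is the easier route:

```sql
-- 32-bit only: let SQL Server address memory above the normal limit via AWE.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;
RECONFIGURE;
-- The SQL Server service must be restarted for this to take effect.
```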
Consider your version of SQL Server carefully - there are many flavours now, including a number of different versions of the free Express edition. The version comparison is interesting: a lot of people might now get away with Workgroup edition, some even with Express edition, but remember there is no SQL Agent / job scheduler with Express, so you'll need to roll your own mechanism for doing backups.
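Rolling your own is nothing more than a backup script scheduled externally; a minimal sketch, with a hypothetical database name and path:

```sql
-- Hypothetical names/paths. Full backup with a date-stamped file name.
DECLARE @file nvarchar(260);
SET @file = N'D:\Backups\MyAppDb_'
          + CONVERT(nvarchar(8), GETDATE(), 112) + N'.bak';

BACKUP DATABASE MyAppDb
    TO DISK = @file
    WITH INIT, CHECKSUM;
```

Save that as backup.sql and have the Windows Task Scheduler run `sqlcmd -S .\SQLEXPRESS -i backup.sql` nightly - that's the job SQL Agent would otherwise be doing for you.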
If you are sharing the server with applications, including the middle tier, then size your memory accordingly - don't squeeze SQL Server out; give it plenty of cache so your application queries are mostly resolved from memory rather than having to go out to disk.
In SQL Server 2005 tempdb plays a bigger part: it holds the version store used by the new concurrency features, and, more importantly, the inserted and deleted tables used in triggers are held there too. Always size tempdb properly - don't leave it at the default. Consider whether your application will use any of the features that require tempdb and size accordingly.
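Sizing tempdb up front is a one-liner per file; the figures here are purely illustrative - base the real numbers on your own capacity planning (tempdev and templog are the default logical file names):

```sql
-- Illustrative sizes only; base the real figures on your workload.
ALTER DATABASE tempdb
    MODIFY FILE (NAME = tempdev, SIZE = 1GB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
    MODIFY FILE (NAME = templog, SIZE = 512MB, FILEGROWTH = 128MB);
```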
Placement of files is critical, as I've mentioned earlier: try and keep the LDF (transaction log) files on their own disks, away from tempdb and the application MDF files.
Anyway, I think that's enough to think about. Sorry for the bullet-point, brain-dump nature of the post, but it's meant to give you food for thought - you can easily Google any of the above for a full description, or just email me and I'll put meat on the bones.