27 February 2012 13:04 tonyrogerson

The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s) thus limiting their performance

When comparing SSD against hard drive performance it really makes me cross when folk think comparing an array of SSDs running at 3Gbit/s against hard drives running at 6Gbit/s is somehow valid. In a paper from DELL (http://www.dell.com/downloads/global/products/pvaul/en/PowerEdge-PowerVaultH800-CacheCade-final.pdf) on increasing database performance using the DELL PERC H800 with solid state drives, they compare four SSDs connected at 3Gbit/s against ten 10Krpm drives connected at 6Gbit/s [Tony slaps forehead while shouting DOH!].

It is true that in the case of hard drives it probably doesn't make much difference whether the link is 3Gbit or 6Gbit: SAS and SATA are both end-to-end protocols rather than a shared-bus architecture like SCSI, so the hard drive doesn't share bandwidth and probably can't get near the ~572MiBytes/second (600MBytes/second) that 6Gbit gives unless you are doing contiguous reads. In my own tests on a single 15Krpm SAS disk using IOMeter (8 worker threads, queue depth of 16, a stripe size of 64KiB, an 8KiB transfer size on a drive formatted with an allocation size of 8KiB, 100% sequential read) I only get 347MiBytes per second sustained throughput at an average latency of 2.87ms per IO, equating to 44.5K IOps. OK, at 3Gbit it would be less – around 280MiBytes per second. Oh, but wait a minute [...fingers tap desk]
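Those figures hang together, incidentally: IOps is just throughput divided by IO size, and Little's Law ties the average latency to the number of outstanding IOs. A quick back-of-envelope sketch, using the numbers from the run above:

```python
# Sanity-check the IOMeter run: IOps = throughput / IO size, and
# Little's Law says outstanding IOs = IOps x average latency.

throughput_mib_s = 347      # sustained MiB/s measured
io_size_kib = 8             # 8KiB transfer size
workers = 8                 # IOMeter worker threads
queue_depth = 16            # outstanding IOs per worker

iops = throughput_mib_s * 1024 / io_size_kib
print(f"IOps: {iops:,.0f}")                    # ~44,416 - the 44.5K quoted, to rounding

latency_ms = (workers * queue_depth) / iops * 1000
print(f"Implied latency: {latency_ms:.2f} ms") # ~2.88ms vs the 2.87ms measured
```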

You'll struggle to find a commodity SSD that doesn't have the SATA 3 (6Gbit) interface. SSDs are fast: not only do they give low latency and high IOps, they also offer a very large sustained transfer rate. Consider the OCZ Agility 3 – it so happens that in my master's dissertation I ran the same test, but on a different box, and got 374MiBytes per second at an average latency of 2.67ms per IO, equating to 47.9K IOps. The cost of a 240GB Agility 3 is £174.24 (http://www.scan.co.uk/products/240gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-500mb-s-85k-iops), but that same drive sat in a box connected with SATA 2 (3Gbit) would only yield around 280MiBytes per second, losing almost 100MiBytes per second of throughput and a ton of IOps too.
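Where does that ~280MiBytes/second ceiling come from? SATA, like SAS, uses 8b/10b encoding on the wire, so only eight of every ten bits carry data. A minimal sketch of the link ceilings (the encoding overhead is the only assumption):

```python
# Effective payload bandwidth of a SATA link: raw bit rate less the
# 8b/10b encoding overhead (10 bits on the wire per 8 bits of data).

def sata_ceiling_mib_s(raw_gbit_s: float) -> float:
    data_bits_per_s = raw_gbit_s * 1e9 * 8 / 10   # strip 8b/10b overhead
    return data_bits_per_s / 8 / (1024 ** 2)      # bits -> bytes -> MiB/s

print(f"SATA 2 (3Gbit): {sata_ceiling_mib_s(3):.0f} MiB/s")   # ~286 MiB/s
print(f"SATA 3 (6Gbit): {sata_ceiling_mib_s(6):.0f} MiB/s")   # ~572 MiB/s

# The Agility 3's measured 374 MiB/s simply cannot fit through a 3Gbit
# link - the interface, not the drive, becomes the bottleneck.
```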

So why the hell are “enterprise” vendors still only connecting SSDs at 3Gbit? Well, my conspiracy theory is that they have no interest in you moving to SSD because they'll lose so much money. The argument for sticking with SATA 2 doesn't wash: SATA 3 has been out for some time now, and all the commodity kit you buy uses it.

Consider the cost not in terms of price per GB but price per IOps – SSDs absolutely thrash hard drives on that. It used to be true that the opposite held, that hard drives thrashed SSDs on price per GB, but is that still true? I'm not so sure. A 300GByte 2.5" 15Krpm SAS drive costs £329.76 ex VAT (http://www.scan.co.uk/products/300gb-seagate-st9300653ss-savvio-15k3-25-hdd-sas-6gb-s-15000rpm-64mb-cache-27ms), which equates to £1.10 per GB, compared to a 480GB OCZ Agility 3 costing £422.10 ex VAT (http://www.scan.co.uk/products/480gb-ocz-agility-3-ssd-25-sata-6gb-s-sandforce-2281-read-525mb-s-write-410mb-s-30k-iops), which equates to £0.88 per GB.
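Working those numbers through, with a price-per-IOps column alongside – the 30K IOps figure for the 480GB Agility 3 comes from the Scan listing above, while the ~180 random IOps for a 15Krpm spindle is my rule-of-thumb assumption:

```python
# Price per GB and price per IOps for the two drives quoted above.
# 30K IOps is from the Scan listing for the 480GB Agility 3; 180 random
# IOps for a 15Krpm spindle is an assumed rule-of-thumb figure.

drives = {
    "300GB 15Krpm SAS":    {"price": 329.76, "gb": 300, "iops": 180},
    "480GB OCZ Agility 3": {"price": 422.10, "gb": 480, "iops": 30_000},
}

for name, d in drives.items():
    print(f"{name}: £{d['price'] / d['gb']:.2f}/GB, "
          f"£{d['price'] / d['iops']:.4f}/IOps")

# 300GB 15Krpm SAS:    £1.10/GB, £1.8320/IOps
# 480GB OCZ Agility 3: £0.88/GB, £0.0141/IOps - roughly 130x cheaper per IOp
```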

OK, I compared an “enterprise” hard drive with a “commodity” SSD, so things get a little more complicated here: most “enterprise” SSDs are SLC and most commodity ones are MLC, and SLC gives more performance and better wear. I'll talk about that another day.

For now though, don't get sucked in by vendor marketing: SATA 2 (3Gbit) just doesn't cut it, and SSDs need 6Gbit to breathe – even that they are pushing against. Alas, SSDs are connected using SATA, and all the controllers I've seen thus far from HP and DELL only do SATA 2 – deliberate? Well, I'll let you decide on that one.

Comments

# re: The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s) thus limiting their performance

27 February 2012 16:40 by GrumpyOldDBA

The trouble is that servers are just so behind in design – just look at the speed of the memory most use. But if you want to think about bandwidth, consider the limitations of the PCI bus and plug-in memory cards... at least the new HP servers have PCIe 3; I'm hoping memory speed will be up too.

# re: The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s) thus limiting their performance

27 February 2012 17:19 by tonyrogerson

PCI Express x16 (PCIe 2.0) gives 8GBytes/sec in each direction, 16GBytes/sec if you use 32 lanes. That's important if you are using GPUs to offload processing from the conventional CPUs; getting data from system memory onto the GPGPU card is the problem.
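That falls straight out of the lane arithmetic – a quick sketch, assuming PCIe 2.0's 5GT/s per lane and 8b/10b encoding:

```python
# Per-direction bandwidth of a PCIe 2.0 link: 5GT/s per lane with
# 8b/10b encoding gives 500MBytes/sec of payload per lane.

def pcie2_gb_s(lanes: int) -> float:
    per_lane_mb_s = 5e9 * 8 / 10 / 8 / 1e6   # 500 MB/s per lane
    return lanes * per_lane_mb_s / 1000      # MB/s -> GB/s

print(f"x16: {pcie2_gb_s(16):.0f} GB/s each way")   # 8 GB/s
print(f"x32: {pcie2_gb_s(32):.0f} GB/s each way")   # 16 GB/s
```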

I don't see any disk-based subsystems that are GENERALLY available giving 8GBytes/sec in either direction, let alone 16GBytes/sec; servers have multiple PCI buses too.

T

# re: The enterprise vendor con - connecting SSDs using SATA 2 (3Gbit/s) thus limiting their performance

06 March 2012 14:55 by dong

Hi, Tony and others

I've been away from UKSQL for quite a while; I came here today to post a blog and saw your post. I was astonished to see your interest in I/O continues even today. :)

Just wondering: I checked the Seagate website, and their top-performing 15K drives (2.5"/3.5"), as per the specifications, can only do a sustained read of 171/202 MB/s, which I think is reasonable, because in 2006 when I was in the I/O fever a top disk could do ~150MB/s. So how come you get that 347 number? Given that later you mention another similar number (374), I guess that number reflects your bus/cache bottleneck rather than the spindle speed of the disk – IOMeter is only talking to the cache, not the platter. Maybe you could use SQLIO for this task?

My best regards,

dong