It’s been too long since my last post, so without further ado…
Recently I was doing some performance testing of my storage server. Last I wrote about it I was using OpenSolaris, but I’ve since moved on to Solaris 11 Express. I wish I had saved the benchmark numbers, but I believe over CIFS I was getting less than 20 MB/s sequential write. One reason I suspected performance was poor: I was using 1.5TB drives, which use the new 4K sector size, and apparently Solaris has a problem with this. Without getting too far ahead of myself, I confirmed this contributed to an 18% performance drop. To remedy the situation I had to use a modified zpool binary from here to set the ashift value to 12 instead of 9. Unfortunately, ashift can only be set at pool creation time.
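For reference, the pool creation went roughly like this. This is a sketch, not my exact commands: the disk names are made up, and it’s the patched binary itself (not a flag) that forces the larger ashift.

```shell
# The patched zpool binary forces ashift=12 (4K sectors) instead of the
# default 9 (512-byte sectors). Disk names below are hypothetical.
./zpool-ashift12 create tank raidz2 c8t0d0 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0

# Verify it took -- zdb dumps the pool's on-disk config, including ashift.
zdb tank | grep ashift
```

Remember that this has to happen at `zpool create` time; you can’t fix an existing pool without destroying and recreating it.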
One thing that got me fired up about revisiting my lab was finding this article about using VMware ESXi 4.1 passthrough. Given the correct hardware, you can give a VM direct hardware access, which in my case means I could run Solaris in a VM and attach the SAS card to it for direct access. Although I might lose some flexibility, the idea of consolidating another two machines into one sounded good to me. With some temporary re-jiggering, I confirmed my current hardware could pass my SAS card through to a Solaris VM just fine.
I figured while I was changing my configuration, I would upgrade my storage a bit. And while I LOVE that Lian-Li case for how quiet and sleek it is, there is no getting around the fact that it is not going to hold enough drives. My desired configuration was six 2TB drives for a raidz2, a drive or two for local VM storage, and maybe some room for an SSD for ZIL and/or cache. My current LSI card only had 4 internal ports (plus 4 external, but I didn’t want to deal with adapters), so I found a Dell PERC 6/i card on Craigslist.
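As a quick sanity check on capacity: raidz2 spends two drives’ worth of space on parity, so six 2TB drives net roughly 8TB before ZFS overhead.

```shell
# raidz2 usable capacity: (drives - 2 parity) * drive size.
awk 'BEGIN {
    drives = 6; size_tb = 2; parity = 2
    printf "Usable: ~%d TB raw (before ZFS overhead)\n", (drives - parity) * size_tb
}'
```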
New Configuration (new parts I needed in bold):
| Part | Price |
| --- | --- |
| Supermicro X8SIL-O motherboard (price actually went up since I bought it) | $154.99 |
| 4 – Kingston KVR1333D3E9S/2G 2GB 1333MHz DDR3 ECC | $119.98 |
| Antec Three Hundred case (6 – 3.5″, 3 – 5.25″) | $59.82 |
| Intel Xeon X3440 Lynnfield 2.53GHz (same price as when I bought it) | $239.99 |
| Rosewill Green Series RG530-2 530W continuous @40°C, 80 PLUS certified, ATX12V v2.3 & EPS12V v2.91 (no longer available, YMMV for pricing a different one) | N/A |
| Dell PERC 6/i from Craigslist | $50.00 |
| 6 – Samsung Spinpoint F4EG 2TB 5400RPM HD | $480.00 |
| 1 Molex to 2 SATA power cable | $1.80 |
| Cooler Master 4-in-3 HDD module | $24.52 |
| Cooler Master 120mm fan 4-in-1 value pack | $14.21 |
| 2 – 32-pin SFF-8484 to 4 SATA (eBay) | $26.38 |
| Western Digital 150GB VelociRaptor (local VM storage) | $114.99 |
| 8GB thumb drive (ESXi installation) | $14.00 |
I find it interesting that when I bought these parts for my last ESXi build, the motherboard was slightly cheaper and the processor was exactly the same price. RAM, of course, dropped quite a bit. I grabbed current pricing from Newegg, Amazon, etc. I did not include tax, which may or may not apply depending on the vendor and your location.
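Adding up the listed prices puts the build at roughly $1,300 (the power supply is excluded since it no longer has a listed price); a quick one-liner to total it:

```shell
# Sum the listed component prices (PSU omitted -- no current price listed).
total=$(printf '%s\n' 154.99 119.98 59.82 239.99 50.00 480.00 \
                      1.80 24.52 14.21 26.38 114.99 14.00 |
        awk '{ s += $1 } END { printf "%.2f", s }')
echo "Total (excluding PSU): \$$total"   # Total (excluding PSU): $1300.68
```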
I deliberated quite a bit on the case. Should I go with a full-blown rackmount server case with hot-swap sleds? I decided to go with a mid-tower case that uses 120mm fans for cooling, and I opted NOT to get hot-swap sleds. Although I love the convenience, the fact is you need to push more air with (probably) smaller fans to deal with the added bulk of the hot-swap trays. You’ll notice that in the setup I’ve purchased, every drive has a 120mm fan in front of it, which delivers excellent airflow with the noise of a desktop, not a helicopter. The Antec Three Hundred is not a high-end Antec case, but it is still good quality: they include thumb screws for the six 3.5″ drives, and cable routing is good.
I have now successfully combined my storage and ESXi server, and so far it’s running quite well. I even got a Kill-A-Watt because I was concerned the additional drives might be pushing the power supply. With 5 VMs running and mostly idle it draws 105 watts. During heavy copies it hit around 140, but that’s nowhere near the 530 watts the power supply is rated for.
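Those Kill-A-Watt readings put the peak draw at only about a quarter of the PSU’s rating, with plenty of headroom. A quick sketch of the math, including a rough annual energy estimate (the $0.12/kWh rate is my assumption; substitute your own utility rate):

```shell
# Headroom and rough annual energy cost from the measured draw.
awk 'BEGIN {
    idle = 105; peak = 140; psu = 530    # watts, from the Kill-A-Watt
    rate = 0.12                          # assumed $/kWh -- adjust for your utility
    printf "Peak load: %.0f%% of the PSU rating\n", 100 * peak / psu
    kwh = idle * 24 * 365 / 1000         # kWh per year at the idle draw
    printf "Idle draw: %.0f kWh/yr, about $%.2f/yr\n", kwh, kwh * rate
}'
```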