Archive

Posts Tagged ‘zfs’

Home Lab Server and Storage Consolidation using ESXi 4.1 and Solaris 11

May 30th, 2011

It’s been too long since my last post, so without further ado…

Recently I was doing some performance testing of my storage server.  Last I wrote about it, I was using OpenSolaris, but I’ve since moved on to Solaris 11 Express.  I wish I had saved the benchmark info, but I believe over CIFS I was getting less than 20 MB/s sequential write.  One reason I suspected performance was poor was that I was using 1.5tb drives, which use the new 4k sector size.  Apparently Solaris has a problem with this.  Without getting too far ahead of myself, I confirmed this did contribute to an 18% performance drop.  To remedy the situation I had to use a modified zpool binary from here to set the ashift value to 12 instead of 9.  Unfortunately, you have to do this at pool creation time.
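For reference, here is roughly what that looks like, assuming a patched build that forces ashift=12 at creation time (the device names are placeholders for my setup, not the actual controller targets):

```shell
# Create the pool with the patched zpool binary so ashift=12 (4k alignment)
# is baked in at creation -- ashift cannot be changed after the pool exists.
./zpool create tank raidz2 c8t0d0 c8t1d0 c8t2d0 c8t3d0 c8t4d0 c8t5d0

# Confirm what the pool actually got: 12 means 4k sectors, 9 means 512-byte
zdb | grep ashift
```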

One thing that got me fired up about revisiting my lab is I found this article about using VMware ESXi 4.1 passthrough.  Given the correct hardware, you can assign a VM direct hardware access, which in my case means I could run Solaris in a VM and attach the SAS card to it for direct access.  Although I might lose some flexibility, the idea of consolidating another two machines into one sounded good to me.  I confirmed the current hardware I had could pass through my SAS card to a Solaris VM just fine with some temporary re-jiggering.

I figured while I was changing my configuration, I would upgrade my storage a bit.  And while I LOVE that Lian-Li case for how quiet and sleek it is, there is no getting around the fact that it is not going to hold enough drives.  My desired configuration was 6 – 2tb drives for a raidz2, a drive or two for local VM storage, and maybe some room for an SSD for ZIL and/or cache.  My current LSI card only had 4 internal ports (4 external additionally, but I didn’t want to deal with adapters).  So I found a Dell PERC 6/i card on Craigslist.

New Configuration (new parts I needed in bold):

Part  Price
Supermicro X8SIL-O Motherboard (price actually went up since I bought it)  $154.99
4 – Kingston KVR1333D3E9S/2G 2GB 1333MHz DDR3 ECC  $119.98
Antec Three Hundred Case (6 – 3.5″, 3 – 5.25″)  $59.82
Intel Xeon X3440 Lynnfield 2.53GHz (same price as when I bought it)  $239.99
Rosewill Green Series RG530-2 530W Continuous @40°C, 80 PLUS Certified, ATX12V v2.3 & EPS12V v2.91 (no longer available; YMMV for pricing a different one)  $42.49
Dell PERC 6/i from Craigslist  $50.00
6 – Samsung Spinpoint F4EG 2tb 5400rpm HD  $480.00
1 Molex to 2 SATA Power Cable  $1.80
Cooler Master 4-in-3 HDD Module  $24.52
Cooler Master 120mm Fan 4-in-1 Value Pack  $14.21
2 – 32-pin SFF-8484 to 4 SATA (eBay)  $26.38
Western Digital 150gb VelociRaptor (local VM storage)  $114.99
8gb Thumb Drive (ESXi installation)  $14.00
Total  $1000.38

I find it interesting that when I bought these parts for my last ESXi build, the motherboard was slightly cheaper and the processor was exactly the same price.  RAM, of course, dropped quite a bit.  I grabbed current pricing from Newegg, Amazon, etc.  I did not include tax, which may or may not apply depending on vendor and your location.

I deliberated quite a bit on the case.  Should I go with a full-blown rack-mount server case with hot-swap sleds?  I decided on a mid-tower case that uses 120mm fans for cooling, and I opted NOT to get hot-swap sleds.  Although I love the convenience, the fact is you need to push more air with (probably) smaller fans to deal with the added bulk of the hot-swap trays.  You’ll notice that in the setup I’ve purchased, all drives have 120mm fans in front of them, which delivers excellent airflow with the noise of a desktop, not a helicopter server.  The Antec Three Hundred is not a high-end Antec case, but it is still good quality.  They included thumbscrews for the 6 – 3.5″ drives, and cable routing is good.

I have now successfully combined my storage and ESXi server.  So far it’s running quite well.  I even got a Kill-A-Watt because I was concerned the additional drives might be pushing the power supply.  With 5 VMs running and mostly idle it draws 105 watts.  When I was doing heavy copies it hit around 140, but that’s nowhere near the 530 watts the power supply is rated for.


OpenSolaris on Gigabyte GA-P965-S3

June 7th, 2009

Now that I have a new box to run ESXi, I’m repurposing my GA-P965-S3-based system for OpenSolaris.  I had a lot of trouble getting this to work.  I was initially using OpenSolaris 2008.11.  I could get it installed.  Reboot, login screen comes up.  I plug in my credentials, and as soon as the password entry box disappeared, lockup.  Mouse stops responding, keyboard stops responding.  I tried every BIOS setting, disabling everything, etc.  Tried different drives, a different video card.  Even tried my LSI SAS card instead of the onboard SATA.  Finally I recalled reading a post somewhere about someone having issues with 4gb of RAM.  So I brought the system down to 2gb and BAM, it worked.  Soon after all this, 2009.06 came out.  I installed that and it worked fine with 4gb of memory.  All 6 onboard SATA ports worked.

For drives, I have 2 – 750gb drives from my old Win2k3-based fileserver.  I also had the 4 – 1.5tb Seagate drives that came with my Opteron box.  I am allocating 2 – 50gb partitions on the 750gb drives for the OS and carving the rest out for a mirrored data partition.  The 4 – 1.5tb drives are going to be in a raidz.

The OS installer doesn’t allow you to create a mirror to start with.  I followed Darkstar’s post on creating a bootable root mirror and it worked great.  You can only do this with slices, not entire disks.  The OpenSolaris installer gives you the option of creating slices or using the entire disk, so remember to use slices if you want to create a mirror.
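The core of that procedure is just an attach plus a GRUB install; a sketch, with placeholder slice names standing in for my actual disks (Darkstar’s post covers the full details):

```shell
# Attach the second disk's slice to the existing root pool to form a mirror.
# This only works because the installer put rpool on a slice (s0), not a whole disk.
zpool attach rpool c7d0s0 c8d0s0

# Make the second disk bootable too, so either one can take over
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8d0s0

# Wait for the resilver to finish before trusting the mirror
zpool status rpool
```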

Creating the raidz is very simple.  In one command I had 4tb of usable storage with all the awesomeness of ZFS and RAIDZ.  I ran some simple benchmarks on a single 500gb drive (non-mirrored) and my new 4tb RAIDZ using FileBench.  The results of the benchmark are below, and they confirm my RAIDZ is quite a bit faster than the single disk.
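That one command looked roughly like this (device names are placeholders; pool name is whatever you like):

```shell
# Four 1.5tb drives in a single raidz vdev: ~6tb raw, roughly 4tb usable
# after one disk's worth of parity and formatting overhead.
zpool create tank raidz c9t0d0 c9t1d0 c9t2d0 c9t3d0

# Sanity-check the layout and capacity
zpool status tank
zpool list tank
```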

I started to offload data from my old Win2k3 fileserver onto the new RAIDZ.  I added OpenSolaris to the domain and created a CIFS (Windows-friendly) share.  Tim Thomas’ blog has a good post on how to do this.  I did find that, out of the box, it didn’t like my Win2k8 domain controller.  I decided to just remove that machine from my domain while I work out the initial setup.  I’ll probably revisit this later.  Permissions appear to be another tricky part of CIFS I’m going to come back to.
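The share itself is only a few commands once the kernel SMB server is enabled; a sketch with placeholder pool/dataset and domain names (Tim Thomas’ post has the full domain-join walkthrough):

```shell
# Enable the in-kernel CIFS service and share a dataset the Windows way
svcadm enable -r smb/server
zfs create tank/media
zfs set sharesmb=on tank/media

# Join the Windows domain (prompts for the privileged account's password)
smbadm join -u Administrator mydomain.local
```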

Unfortunately, after a few hundred gigs of transfer, one of the 1.5tb drives failed.  The RAIDZ kept on going, but soon after the first drive failure, the second drive started showing errors.  I used the Ultimate Boot CD and confirmed both drives are indeed failing: one is making click-of-death noises, the other appears to be on the way out.  I opted to go with Seagate’s Advanced Replacement and pay $20 per drive so I could get everything back up and running quickly.  There should be a discount for multiple drives.  Also, paying for this at all on drives that are a few months old kind of stinks.

Here are the benchmark results:

Throughput breakdown (ops per second)

Workload                     fileio raidz 4 – 1.5tb    fileio 1 – 500gb
multistreamread1m            208                       69
multistreamreaddirect1m      204                       70
multistreamwrite1m           113                       65
multistreamwritedirect1m     105                       67
randomread1m                 70                        21
randomread2k                 196                       167
randomread8k                 202                       173
randomwrite1m                108                       55
randomwrite2k                163                       128
randomwrite8k                160                       127
singlestreamread1m           79                        39
singlestreamreaddirect1m     76                        39
singlestreamwrite1m          119                       73
singlestreamwritedirect1m    121                       73

Bandwidth breakdown (MB/s)

Workload                     fileio raidz 4 – 1.5tb    fileio 1 – 500gb
multistreamread1m            208                       69
multistreamreaddirect1m      204                       70
multistreamwrite1m           113                       65
multistreamwritedirect1m     105                       67
randomread1m                 70                        21
randomread2k                 0                         0
randomread8k                 1                         1
randomwrite1m                108                       55
randomwrite2k                0                         0
randomwrite8k                1                         1
singlestreamread1m           79                        39
singlestreamreaddirect1m     76                        39
singlestreamwrite1m          119                       73
singlestreamwritedirect1m    121                       73
