Posts Tagged ‘hardware’

Home Lab Server and Storage Consolidation using ESXi 4.1 and Solaris 11

May 30th, 2011

It’s been too long since my last post, so without further ado…

Recently I was doing some performance testing of my storage server.  Last I wrote about it I was using OpenSolaris, but I’ve since moved on to Solaris 11 Express.  I wish I had saved the benchmark info, but I believe over CIFS I was getting less than 20 MB/s sequential write.  One reason I suspected performance was poor was that my 1.5tb drives use the new 4k sector size, which Solaris apparently handles badly.  Without getting too far ahead of myself, I confirmed this contributed to an 18% performance drop.  To remedy the situation I had to use a modified zpool binary from here to set the ashift value to 12 instead of 9.  Unfortunately you can only set this at pool creation time.
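For reference, pool creation with the patched binary looked roughly like this.  The pool name and device IDs below are placeholders, and I’m assuming the patched zpool hard-codes ashift=12 as described:

```shell
# Sketch only: pool name and device IDs are placeholders for your own disks.
# The stock zpool in Solaris 11 Express has no way to force ashift, so this
# assumes the patched binary, run from wherever you unpacked it.
./zpool create tank raidz2 c7t0d0 c7t1d0 c7t2d0 c7t3d0 c7t4d0 c7t5d0

# Verify the pool really got 4k-aligned writes:
zdb tank | grep ashift    # should report ashift: 12, not 9
```

Once the pool exists with ashift=12, the normal system zpool/zfs tools manage it from then on; the patched binary is only needed at creation time.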

One thing that got me fired up about revisiting my lab is I found this article about using VMWare ESXi 4.1 passthrough.  Given the correct hardware, you can give a VM direct hardware access, which in my case means I can run Solaris in a VM and attach the SAS card to it directly.  Although I might lose some flexibility, the idea of consolidating another two machines into one sounded good to me.  With some temporary re-jiggering, I confirmed my current hardware could pass the SAS card through to a Solaris VM just fine.

I figured while I was changing my configuration, I would upgrade my storage a bit.  And while I LOVE that Lian-Li case for how quiet and sleek it is, there is no getting around the fact that it is not going to hold enough drives.  My desired configuration was 6 – 2tb drives in a raidz2, a drive or two for local vm storage, and maybe some room for an SSD for ZIL and/or cache.  My current LSI card only had 4 internal ports (plus 4 external, but I didn’t want to deal with adapters), so I found a Dell PERC 6/i card on craigslist.

New Configuration (new parts I needed in bold):

Part Price
Supermicro X8SIL-O Motherboard (Price actually went up since I bought it) $154.99
4 – Kingston KVR1333D3E9S/2G 2GB 1333MHZ DDR3 ECC $119.98
Antec Three Hundred Case (6 – 3.5″, 3 – 5.25″) $59.82
Intel Xeon X3440 Lynnfield 2.53ghz (Same price as when I bought it) $239.99
Rosewill Green Series RG530-2 530W Continuous @40°C, 80 PLUS Certified, ATX12V v2.3 & EPS12V v2.91 (no longer available; YMMV for pricing on a different one) $42.49
Dell PERC 6/i from Craigslist $50.00
6 – Samsung Spinpoint F4EG 2tb 5400rpm HD $480
1 Molex to 2 SATA Power Cable $1.80
Cooler Master 4 in 3 HDD Module $24.52
Cooler Master 120mm Fan 4 in 1 Value Pack $14.21
2 – 32pin SFF-8484 to 4 Sata (ebay) $26.38
Western Digital 150gb VelociRaptor (local vm storage) $114.99
8gb Thumb Drive (ESXi installation) $14.00
Total $1000.38

I find it interesting that when I bought these parts for my last esxi build, the motherboard was slightly cheaper and the processor was exactly the same price.  RAM, of course, dropped quite a bit.  I grabbed current pricing from newegg, amazon, etc.  I did not include tax, which may or may not apply depending on vendor and your location.

I deliberated quite a bit on the case.  Should I go full blown rack mount server case with hot swap sleds?  I decided to go with a mid-tower case that uses 120mm fans for cooling, and I opted NOT to get hot swap sleds.  Although I love the convenience, the fact is you need to push more air with (probably) smaller fans to deal with the added bulk of the hot swap trays.  You’ll notice in the setup I’ve purchased, all drives have 120mm fans in front of them, which delivers excellent air flow with the noise of a desktop, not a helicopter server.  The Antec Three Hundred is not a high end Antec case, but it is still good quality.  Antec includes thumb screws for the 6 – 3.5″ drives, and cable routing is good.

I have now successfully combined my storage and ESXi server.  So far it’s running quite well.  I even got a Kill-A-Watt because I was concerned the additional drives might be pushing the power supply.  With 5 VMs running and mostly idle it draws 105 watts.  During heavy copies it hit around 140, but that’s nowhere near the 530 watts the power supply is rated for.


New Super Quiet Supermicro X8SIL VMWare ESXi Server

October 22nd, 2009

Update: VMWare ESXi 4.1 detects the SATA controller just fine.  The separate SAS card is no longer necessary.

The novelty of having a 1U server in my small apartment has worn off. Even on the workstation setting, the tiny fans running at 10k RPM make my home office inhospitable for all but brief periods. I’ve contemplated getting another case or jury-rigging some large low-rpm fans, but in the end I decided it’s best if I just build a new machine and sell the old server on craigslist.

Before I dive into detail here is my parts list. Just add a SATA drive or two and you’re good to go.

Part Price
Supermicro X8SIL-O Motherboard $149.99
4 – Kingston KVR1333D3E9S/2G 2GB 1333MHZ DDR3 ECC $199.96
Lian-Li PC-V351B Case $109.88
Intel Xeon X3440 Lynnfield 2.53ghz $239.99
Rosewill Green Series RG530-2 530W Continuous @40°C, 80 PLUS Certified,ATX12V v2.3 & EPS12V v2.91 $42.49
Used LSI SAS3442E-R PCIe from Ebay $135.99
Tax and shipping $82.55
Total $960.85

I decided the base of my new ESXi system would be the Supermicro X8SIL-O MicroATX motherboard.

Motherboard Negatives:

  • The onboard SATA controller is not detected by ESXi 4.0. Thankfully I have a supported LSI PCI-E SAS card.

Motherboard Positives:

  • MicroATX form factor means I can fit it in a smaller case.
  • 4 – DDR3 slots, which can be populated with up to 32gb of RAM.
  • 2 Intel Gig-E NICs which support jumbo frames (Hello iSCSI!). (Most inexpensive boards use Realtek nics which can be flaky under load and are usually not supported by VMWare out of the box.)
  • USB connection on the motherboard that allows you to install your OS to a thumb drive and leave it inside the case.
  • Onboard video means one less thing to buy.

I mated it with the Intel Xeon X3440 CPU, which is basically the server version of the i7. This is currently the least expensive quad-core Intel chip that supports hyperthreading, giving you 8 logical cores.

I opted for 4 sticks of Kingston 2GB DDR3 ECC RAM, bringing the total to 8gb. It is a downgrade from the 16 I have in the 1U server, but I think it’s a worthwhile trade for a little silence.

To hold this beast, I decided on a small form factor case by Lian-Li. The PC-V351B is almost square. The fans and drives are mounted with rubber grommets, which cut sound and vibration nicely. The quality is top notch, but it’s definitely not the case you want if you plan on swapping parts frequently. It has a motherboard tray which slides out when you need to install cards. If you want to pop the side panels off, get your screwdriver out. Each side is held in place by 6 tiny screws. Thanks to my cat knocking my loose parts tray over during assembly, I only need to worry about 4 per side now. In theory, the motherboard tray sliding out means you shouldn’t need to take the sides off. In practice, this is not always the case.

For the power supply, I selected a mid-range supply from Rosewill. It should be a nice stable supply with enough power.

I only had one minor hiccup assembling the pieces. The Lian-li lead for the Power LED has a 3 pin female connector (with the center pin being unused). The board uses 2 pins side by side for the Power LED. It was an easy fix to gently push out the wire from pin 3 and move it to the unused pin 2. Other than that I had plenty of reach with the cables and had sufficient places to tuck excess cables.

Installing ESXi to a thumb drive was super easy. I just followed these instructions. The thumb drive plugs into the connector inside the case and doesn’t get in the way of anything.

I’ve migrated all my VMs to it and so far so good. The best part is I can’t hear it! It’s so quiet I can hardly tell it’s on. Now, who wants to buy a 1U server?


OpenSolaris on Gigabyte GA-P965-S3

June 7th, 2009

Now that I have a new box to run ESXi, I’m repurposing my GA-P965-S3 based system for OpenSolaris.  I had a lot of trouble getting this to work.  I was initially using OpenSolaris 2008.11.  I could get it installed.  Reboot, login screen comes up.  I plugged in my credentials, and as soon as the password entry box disappeared, lockup.  Mouse stops responding, keyboard stops responding.  I tried every BIOS setting, disabling everything, etc.  Tried different drives, a different video card.  Even tried my LSI SAS card instead of the onboard SATA.  Finally I recalled reading a post somewhere that someone was having issues with 4gb of RAM.  So I brought the system down to 2gb and BAM, it worked.  Soon after all this, 2009.06 came out.  I installed that and it worked fine with 4gb of memory.  All 6 onboard SATA ports worked.

For drives, I have 2 – 750gb drives from my old Win2k3 based fileserver.  I also had the 4 – 1.5tb Seagate drives that came with my Opteron box.  I am allocating 2 – 50gb partitions on the 750gb drives for the OS, and carving the rest out for a mirrored data partition.  The 4 – 1.5tb drives are going into a raidz.

The OS installer doesn’t allow you to create a mirror to start with.  I followed Darkstar’s post on creating a bootable root mirror and it worked great.  You can only do this with slices, not entire disks.  The OpenSolaris installer gives you the option of creating slices or using the entire disk, so remember to use slices if you want to create a mirror.
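For the record, the procedure boils down to a few commands.  The device names below are placeholders for your own disks:

```shell
# Placeholder device names (c8t0d0 = current boot disk, c8t1d0 = new disk).
# Copy the boot disk's partition table to the new disk:
prtvtoc /dev/rdsk/c8t0d0s2 | fmthard -s - /dev/rdsk/c8t1d0s2

# Attach the matching slice to rpool, converting it to a two-way mirror:
zpool attach -f rpool c8t0d0s0 c8t1d0s0

# Install GRUB on the new disk so either drive can boot (x86):
installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c8t1d0s0

# Watch the resilver and wait for it to finish before rebooting:
zpool status rpool
```

Note that zpool attach wants slices (s0) here, which is why the installer’s slice option matters.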

Creating the raidz is very simple.  In one command I had 4tb of usable storage with all the awesomeness of ZFS and RAIDZ.  I ran some simple benchmarks on a single 500gb drive (non-mirrored) and my new 4tb RAIDZ using FileBench.  The results of the benchmark are below.  This confirms my RAIDZ is quite a bit faster than the single disk.
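That one command, with placeholder pool and device names, is just:

```shell
# Placeholder names: "tank" and the c9* device IDs are stand-ins.
# A raidz of four 1.5tb drives gives roughly three drives' worth of
# usable space; one drive's worth goes to parity.
zpool create tank raidz c9t0d0 c9t1d0 c9t2d0 c9t3d0

# Sanity-check the new pool and filesystem:
zpool list tank
zfs list tank
```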

I started to offload data from my old Win2k3 fileserver onto the new RAIDZ.  I added OpenSolaris to the domain and created a CIFS (windows friendly) share.  Tim Thomas’ blog has a good post on how to do this.  I did find that, out of the box, it didn’t like my Win2k8 domain controller.  I decided to just remove that machine from my domain while I work out the initial setup.  I’ll probably revisit this later.  Permissions appear to be another tricky part of CIFS I’m going to come back to.
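The basic setup using the in-kernel CIFS server looks roughly like this; the pool/share names and the domain below are placeholders, not my actual config:

```shell
# Placeholder names throughout: "tank/share", "Administrator", and
# "mydomain.local" are stand-ins for your own pool, user, and domain.

# Enable the SMB service (and its dependencies):
svcadm enable -r smb/server

# Join the Windows domain:
smbadm join -u Administrator mydomain.local

# Create a dataset with CIFS-friendly case handling and share it:
zfs create -o casesensitivity=mixed tank/share
zfs set sharesmb=name=share tank/share    # published as \\host\share
```

casesensitivity can only be set at dataset creation, which is why it goes on the zfs create line rather than a later zfs set.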

Unfortunately, after a few hundred gigs of transfer, 1 of the 1.5tb drives failed.  The RAIDZ kept on going, but soon after the first failure a second drive started showing errors.  I used the Ultimate Boot CD and confirmed both drives are indeed failing: one is making click-of-death noises, the other appears to be on its way out.  I opted to go with Seagate’s Advanced Replacement and pay $20 per drive so I could get everything back up and running quickly.  There should be a discount for multiple drives.  Also, paying for this at all on drives that are a few months old kind of stinks.

Here are the benchmark results:

Throughput breakdown (ops per second)

Workload                     fileio raidz 4 – 1.5tb   fileio 1 – 500gb
multistreamread1m                               208                 69
multistreamreaddirect1m                         204                 70
multistreamwrite1m                              113                 65
multistreamwritedirect1m                        105                 67
randomread1m                                     70                 21
randomread2k                                    196                167
randomread8k                                    202                173
randomwrite1m                                   108                 55
randomwrite2k                                   163                128
randomwrite8k                                   160                127
singlestreamread1m                               79                 39
singlestreamreaddirect1m                         76                 39
singlestreamwrite1m                             119                 73
singlestreamwritedirect1m                       121                 73

Bandwidth breakdown (MB/s)

Workload                     fileio raidz 4 – 1.5tb   fileio 1 – 500gb
multistreamread1m                               208                 69
multistreamreaddirect1m                         204                 70
multistreamwrite1m                              113                 65
multistreamwritedirect1m                        105                 67
randomread1m                                     70                 21
randomread2k                                      0                  0
randomread8k                                      1                  1
randomwrite1m                                   108                 55
randomwrite2k                                     0                  0
randomwrite8k                                     1                  1
singlestreamread1m                               79                 39
singlestreamreaddirect1m                         76                 39
singlestreamwrite1m                             119                 73
singlestreamwritedirect1m                       121                 73


VMWare ESXi: GA-P965-S3 and Supermicro AS-1021M-T2+B

May 31st, 2009

I have now built a couple ESXi machines at home and it can be tough finding hardware that you know is going to work with ESXi.  I thought I would contribute a couple working configurations.

Motherboard: Gigabyte GA-P965-S3 rev 1.0

When I built this, the onboard SATA ports and NIC wouldn’t work with ESX 3.  I couldn’t even get the IDE channel to work after I bought a SAS card to use.  It would boot off the IDE cdrom, get to a certain point, and die.  I ended up having to buy a SATA cd-rom.  One of the reasons I bought this board is that it has 4 PCIe slots, which was helpful when none of the onboard items worked.

Storage Controller: LSI SAS3442E-R PCIe

I got a pretty good deal on one of these hunting ebay.  It has an internal and an external port.  To use 4 internal SATA disks, you’ll need an SFF-8484 to 4 SATA fan-out cable.

Video: PCI Radeon 7000 card (Important since the LSI card takes up the 1 – 16x PCIe slot)

NIC: Intel Pro/1000 PT Desktop Adapter

When you manage to put together a supported config, ESXi is a very simple install.  If you don’t have supported hardware, it fails and tells you pretty quickly.  I ran both ESX 3.0 and ESXi 4.0 on the above hardware.

Once I got it up and running, it’s been a good system.  I recently caught the ZFS bug though and I needed a new system to allow me to continue running ESXi and another system to start using OpenSolaris.

After scouring craigslist, I found a used Supermicro AS-1021M-T2+B system.  It has an H8DME-2 motherboard.  I was a little concerned about whether I was going to have to jump through hoops to get this to work.  I searched a lot about the NVidia MCP55 chipset, and it seemed like I would have to do some work and maybe buy either a new NIC or storage card.  Turns out ESXi 4.0 installs without a hitch.  Both GigE NICs are supported, as well as the onboard SATA controller.  I even did an informal IOMeter test and got better IOPS on this than on my other machine with the SAS card.

Now I’m repurposing the Gigabyte machine to be my OpenSolaris machine.  As I’ve come to expect, that’s not going as smoothly as I hoped.  But that’s another post.
