Goodbye MediaPortal, Hello XBMC!

It’s been a few years since I first dedicated a computer to being hooked up to my TV. The ideal: a nice, pretty interface to access my digital content from. I’ve used MythTV, Freevo, my XBox 360 via TVersity, and MediaPortal, the last two being the most recent.  I have a TiVo, so I don’t need a device that works with a tuner of any sort, and I can’t comment on the recording abilities of any of these apps.

I abandoned Freevo and MythTV because the setup of MediaPortal can’t be beat.  Install Windows, install MediaPortal, and install the K-Lite Codec Pack.  Done.  No Linux drivers to fuss with.

I had heard about XBox Media Center (XBMC) and it sounded interesting, but I discounted it because I thought I would need an original XBox to load it on.  But wait!  It turns out I was wrong!  You can install it on an original XBox, sure, but it also works on Windows, Linux, and Mac OS X!  Awesome.

The screenshots on the website make a nice case for the pretty interface, but the proof is in the pudding (and I love pudding).  MediaPortal has a separate app to configure all its settings; it’s a standard Windows app and requires getting up close to the TV to see all the options.  XBMC has all the settings right there in the main program, so setup is little more than navigating the big beautiful menus and hitting enter or escape.  You can tell the intent was to make it easy to set up even if all you have is an XBox controller.  It makes for a pleasant experience.

Setup aside, one of the most compelling features for me is how easily you can make it pull down info from IMDB and a few other sources.  It turns a sparse file listing into a rich library with movie posters, plot summaries, and cast info at your fingertips.  In MediaPortal, the initial batch download of this info was done in the config app.  In XBMC I could still browse my movies while a nice box in the upper right-hand corner told me which movies it was processing.  Very nice.

I’ve only dipped my toe in, but I’m sold on XBMC.

Posted in General | Tagged , , | 1 Comment

Suddenly can’t log in to OpenSolaris 2009.06 CIFS share

I woke up this morning and couldn’t get onto my CIFS share. A quick look at /var/adm/messages and I saw this problem:

Jun 15 23:10:10 zed idmap[346]: [ID 702911 auth.notice] GSSAPI Error: Unspecified GSS failure. Minor code may provide more information (Clock skew too great)

OK, so this is because the clock on this machine isn’t close enough to the clock on my domain controller; Kerberos only tolerates a few minutes of skew.  I’ll just do a ‘crontab -e’ and plug this in:

# Sync date/time with my domain controller
15 * * * * /usr/sbin/ntpdate your.domain.controller.com
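The reason a few minutes of drift matters at all is Kerberos: by default it tolerates only about five minutes of skew between client and KDC before rejecting tickets. A toy sketch of that check in Python (purely illustrative; idmap and Kerberos obviously don't run this):

```python
from datetime import datetime, timedelta

# Kerberos rejects tickets when client/server clocks differ by more
# than the allowed skew (300 seconds by default).
MAX_SKEW = timedelta(minutes=5)

def clock_skew_ok(client_time, server_time, max_skew=MAX_SKEW):
    """Return True if the two clocks are within the allowed skew."""
    return abs(client_time - server_time) <= max_skew

# A 2-minute drift is fine; a 10-minute drift triggers
# "Clock skew too great".
now = datetime(2009, 6, 15, 23, 10, 10)
print(clock_skew_ok(now, now + timedelta(minutes=2)))   # True
print(clock_skew_ok(now, now + timedelta(minutes=10)))  # False
```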

Now it should stay synchronized. But wait, I still can’t access my shares.

# svcadm disable idmap
# svcadm disable smb/server
# svcadm enable -r idmap
# svcadm enable -r smb/server

That didn’t do it.

# smbadm list
[*] [MYDOMAIN]
[*] [mydomain.com]

…and proceeds to hang.

# smbadm join -w WORKGROUP
hangs.

# smbadm join -u domainuser mydomain
hangs.

/var/adm/messages shows: svc.startd[7]: [ID 122153 daemon.warning] svc:/network/smb/server:default: Method or service exit timed out. Killing contract 70.

I also noticed that despite smb/server being disabled, the process still appears to be running.  kill -9 does nothing.

I had experienced a similar issue earlier during setup and had written it off.  It’s looking like the stability of CIFS isn’t so rock solid.  This post on the cifs-discuss list definitely shows I’m not the only one having issues.

I’m tempted to use VirtualBox and run a virtual Win2k3 server on top of OpenSolaris.  I would create an iSCSI target in my zpool and point the Win2k3 box at it.  Let Windows seamlessly share files, which it’s good at, and OpenSolaris manage the storage, which it’s good at.  It’s an interesting thought, but I’m going to see if the latest SXCE fixes my CIFS woes first.


Migrating to an OpenSolaris Fileserver

After getting replacements for my failed drives, I tackled migrating data off my old Windows 2003 fileserver onto my fancy OpenSolaris ZFS fileserver.

My Windows server decided this was the time it was going to become corrupt too.  I was using an nvraid mirror and it went out of sync; I wasn’t able to recover it.  My skepticism about cheap built-in IDE/SATA RAID has been confirmed.

All my data was still available on other drives though.  I tried attaching them to the OpenSolaris box using the read-only NTFS support to copy my data to my big ZFS raidz.  The copy speed was agonizingly slow; I had over 1tb to copy and I think it would have taken over 48 hours to copy it all.  I ended up putting the drives in a USB enclosure, attaching it to my Windows laptop, and copying it over the gigE NICs.  Surprisingly, that was faster.
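Some back-of-the-envelope arithmetic on that 48-hour estimate (my rough numbers, not a measurement):

```python
# Rough copy-time arithmetic: how slow is "1tb in 48 hours"?
DATA_BYTES = 1 * 10**12
SECONDS = 48 * 3600

throughput_mb_s = DATA_BYTES / SECONDS / 10**6
print(round(throughput_mb_s, 1))   # ~5.8 MB/s -- painfully slow

# For comparison, even a third of a gigabit link moves the same
# data in a handful of hours:
gige_third_mb_s = 125 / 3                      # 1 Gb/s ~= 125 MB/s
hours = DATA_BYTES / (gige_third_mb_s * 10**6) / 3600
print(round(hours, 1))             # ~6.7 hours
```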

This also gave me an opportunity to try out RichCopy as an alternative to robocopy.  As a sidebar, I use robocopy almost every day.  RichCopy includes a GUI, which I assumed would put poring over robocopy /? | more behind me whenever I need an option I don’t commonly use.  Unfortunately the interface only emphasizes that this was a Microsoft internal tool, which is to say it’s not much better than the command-line help.  The feature I’m most excited it adds is multithreaded copying.  With just a couple of threads, I have to believe more bandwidth can be utilized.
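The multithreaded-copy idea is easy to sketch: with several files in flight at once, one file's latency no longer stalls the whole copy. A minimal Python sketch of the concept (RichCopy is closed-source; the function name and thread count here are my own):

```python
import shutil
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def copy_tree_threaded(src, dst, threads=4):
    """Copy every file under src to dst, several files at a time.

    With a handful of worker threads, per-file latency (seeks, network
    round trips) overlaps, so the link stays busier -- the same idea
    behind RichCopy's multithreaded copy.
    """
    src, dst = Path(src), Path(dst)
    files = [p for p in src.rglob("*") if p.is_file()]
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for f in files:
            target = dst / f.relative_to(src)
            # Create directories in the main thread to avoid races.
            target.parent.mkdir(parents=True, exist_ok=True)
            pool.submit(shutil.copy2, f, target)
```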

To do all that copying, I had to set up the OpenSolaris CIFS service.  Tim Thomas’ post is a good first read.  I did run into a snag with having one of my domain controllers be a Windows Server 2008 machine.  Justin Thomas’ experience makes me wonder if a bleeding-edge Solaris version is in my future.  For now I opted to just demote the server, as it was only for testing anyway.

To get the files onto the server, I just followed Tim’s instructions and had a wide-open share.  Now that the files are there, I wanted to dial in the permissions.  I liked Steve Radish’s instructions.  I’m used to the old Unix chmod, and I found the giant string of characters needed to set ACL permissions a bit daunting.  Steve made me realize you can just set the permission from the Windows side, then use ls -V to see what the effective permission is.  It really helped ease me into it.

I forget at what point, but I ran into an issue where my domain credentials wouldn’t let me see the share. I was seeing this in my /var/adm/messages:

Jun 13 11:01:15 solarbox smbd[2132]: [ID 266262 daemon.error] MYDOMAIN\myusername: idmap failed

The following commands resolved it for me:

svccfg -s idmap setprop config/unresolvable_sid_mapping = boolean: true
svcadm refresh idmap

OpenSolaris on Gigabyte GA-P965-S3

Now that I have a new box to run ESXi, I’m repurposing my GA-P965-S3 based system for OpenSolaris.  I had a lot of trouble getting this to work.  I was initially using OpenSolaris 2008.11.  I could get it installed.  Reboot, login screen comes up.  I plug in my credentials and as soon as the password entry box disappeared, lockup.  Mouse stops responding, keyboard stops responding.  I tried every BIOS setting, disabling everything, etc.  Tried different drives, a different video card.  Even tried my LSI SAS card instead of the onboard SATA.  Finally I recalled reading a post somewhere that someone was having issues with 4gb of RAM.  So I brought the system down to 2gb and BAM, it worked.  Soon after all this, 2009.06 came out.  I installed that and it worked fine with 4gb of memory.  All 6 onboard SATA ports worked.

For drives, I have 2 – 750gb from my old Win2k3-based fileserver.  I also had the 4 – 1.5tb Seagate drives that came with my Opteron box.  I am allocating 2 – 50gb partitions on the 750gb drives for the OS, and carving the rest out for a mirrored data partition.  The 4 – 1.5tb drives are going into a raidz.

The OS installer doesn’t allow you to create a mirror to start with.  I followed Darkstar’s post on creating a bootable root mirror and it worked great.  You can only do this with slices, not entire disks.  The OpenSolaris installer gives you the option of creating slices or using the entire disk, so remember to use slices if you want to create a mirror.

Creating the raidz is very simple.  In one command I had 4tb of usable storage with all the awesomeness of ZFS and RAIDZ.  I ran some simple benchmarks on a single 500gb drive (non-mirrored) and my new 4tb RAIDZ using FileBench.  The results of the benchmark are below.  They confirm my RAIDZ is quite a bit faster than the single disk.
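That "4tb" figure is easy to reconcile: single-parity raidz on four drives spends one drive's worth of space on parity, and the drive makers' decimal terabytes shrink once the OS reports them in binary TiB. Rough arithmetic, ignoring ZFS metadata overhead:

```python
def raidz_usable_bytes(drives, drive_bytes):
    """Single-parity raidz: one drive's worth of space goes to parity."""
    return (drives - 1) * drive_bytes

TB = 10**12   # how drive makers count
TIB = 2**40   # how the OS counts

usable = raidz_usable_bytes(4, int(1.5 * TB))
print(usable / TB)    # 4.5 "marketing" terabytes
print(usable / TIB)   # ~4.09 TiB, i.e. the roughly 4tb the tools show
```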

I started to offload data from my old Win2k3 fileserver onto the new RAIDZ.  I added OpenSolaris to the domain and created a CIFS (Windows-friendly) share.  Tim Thomas’ blog has a good post on how to do this.  I did find that, out of the box, it didn’t like my Win2k8 domain controller.  I decided to just remove that machine from my domain while I work out the initial setup; I’ll probably revisit this later.  Permissions appear to be another tricky part of CIFS I’m going to come back to.

Unfortunately, after a few hundred gigs of transfer, one of the 1.5tb drives failed.  The RAIDZ kept on going, but soon after the first failure, a second drive started showing errors.  I used the Ultimate Boot CD and confirmed both drives are indeed failing: one is making click-of-death noises, the other appears to be on its way out.  I opted to go with Seagate’s Advanced Replacement and pay $20 per drive so I could get everything back up and running quickly.  There should be a discount for multiple drives.  Also, paying for this at all on drives that are a few months old kind of stinks.

Here are the benchmark results:

Throughput breakdown (ops per second)

Workload                    fileio raidz 4 – 1.5tb   fileio 1 – 500gb
multistreamread1m                              208                 69
multistreamreaddirect1m                        204                 70
multistreamwrite1m                             113                 65
multistreamwritedirect1m                       105                 67
randomread1m                                    70                 21
randomread2k                                   196                167
randomread8k                                   202                173
randomwrite1m                                  108                 55
randomwrite2k                                  163                128
randomwrite8k                                  160                127
singlestreamread1m                              79                 39
singlestreamreaddirect1m                        76                 39
singlestreamwrite1m                            119                 73
singlestreamwritedirect1m                      121                 73

Bandwidth breakdown (MB/s)

Workload                    fileio raidz 4 – 1.5tb   fileio 1 – 500gb
multistreamread1m                              208                 69
multistreamreaddirect1m                        204                 70
multistreamwrite1m                             113                 65
multistreamwritedirect1m                       105                 67
randomread1m                                    70                 21
randomread2k                                     0                  0
randomread8k                                     1                  1
randomwrite1m                                  108                 55
randomwrite2k                                    0                  0
randomwrite8k                                    1                  1
singlestreamread1m                              79                 39
singlestreamreaddirect1m                        76                 39
singlestreamwrite1m                            119                 73
singlestreamwritedirect1m                      121                 73


VMWare ESXi: GA-P965-S3 and Supermicro AS-1021M-T2+B

I have now built a couple of ESXi machines at home, and it can be tough finding hardware that you know is going to work with ESXi.  I thought I would contribute a couple of working configurations.

Motherboard: Gigabyte GA-P965-S3 rev 1.0

When I built this, the onboard SATA ports and NIC wouldn’t work with ESX 3.  I couldn’t even get the IDE channel to work once I bought a SAS card to use.  It would boot off the IDE CD-ROM, get to a certain point, and die.  I ended up having to buy a SATA CD-ROM.  One of the reasons I bought this board is that it has 4 PCIe slots, which was helpful when none of the onboard items worked.

Storage Controller: LSI SAS3442E-R PCIe

I got a pretty good deal on one of these hunting eBay.  It has an internal and an external port.  To use 4 internal SATA disks, you’ll need an SFF-8484 to 4x SATA fan-out cable.

Video: PCI Radeon 7000 card (important, since the LSI card takes up the single x16 PCIe slot)

NIC: Intel Pro/1000 PT Desktop Adapter

When you manage to put together a supported config, ESXi is a very simple install.  If you don’t have supported hardware, it fails and tells you pretty quickly.  I ran both ESX 3.0 and ESXi 4.0 on the above hardware.

Once I got it up and running, it was a good system.  I recently caught the ZFS bug though, and I needed a new system so I could keep running ESXi on one box and start using OpenSolaris on another.

After scouring craigslist, I found a used Supermicro AS-1021M-T2+B system.  It has an H8DME-2 motherboard.  I was a little concerned about whether I was going to have to jump through hoops to get this to work.  I searched a lot about the NVidia MCP55 chipset, and it seemed like I would have to do some work and maybe buy a new NIC or storage card.  Turns out ESXi 4.0 installs without a hitch.  Both GigE NICs are supported, as well as the onboard SATA controller.  I even did an informal IOMeter test, and I got better IOPS on this than on my other machine with the SAS card.

Now I’m repurposing the Gigabyte machine to be my OpenSolaris machine.  As I’ve come to expect, that’s not going as smoothly as I hoped.  But that’s another post.


Expand your virtual machine’s boot drive

A cool use of VMWare Converter* (a free program) is to increase the size of a Windows boot drive (C:).  Increasing the size of a separate disk in VMWare isn’t too hard: one simple way is to attach a new, larger vdisk, mount it in Windows, robocopy to the new disk, change the drive letter assignments, power down, and remove the old disk.  But boot drives are tricky.  With VMWare Converter, I just clone my VM and ask it to change the disk size.  You can even shrink disks with it.

Converter Target Disk

I came across an irksome problem trying to increase the size of one of my VMs.  I was using Converter to convert one VMWare Workstation image to another with a larger C: drive, which resulted in this message:

FAILED: A file I/O error occurred while accessing ‘J:
\VM\UniversalBuildTemplate\W2K3UniBuildTemplate\W2K3UniBuildTemplate.vmdk’.

This seems to be caused by having opted to pre-allocate the new disk (performance is better with pre-allocated disks).  When I didn’t opt for pre-allocation, the problem went away.  I wonder if part of the reason is that my source image is not pre-allocated.

One thing Converter seems to have over the VMWare DiskManager GUI is that it actually resizes the NTFS partition, not just the virtual disk.  I did use that utility after Converter to convert my disk from dynamic to preallocated.

Also, I’d like to mention I used the DonationCoder program Screenshot Capture for the image above.  Normally I just print-screen and fire up Paint.NET, crop, and use the red pencil to highlight.  But I think the blur and highlight features of Screenshot Capture are pretty neat.

*VMWare vCenter Converter Standalone Client Version 4.0.0 Build 146302

Powershell: Invoke-Expression and Tee-Object

So in my previous post I wrote about how I got around a problem with some standard error output from Perforce.  I want to take my workaround function and pipe it into Tee-Object so I can simultaneously display and log what’s going on during the execution of my consistency check.  A consistency check can take a while, so being able to see its running output is important.

In my function, I use Invoke-Expression.  Well, it turns out it’s not so easy to just pipe that into Tee-Object.  At least in PSH V1.

Here is my test PowerShell script:

write-host Running in PowerShell Version $host.version.major
write-host 'Executing: Invoke-Expression "print.exe /?" | tee-object -filepath test.log'
Invoke-Expression "print.exe /?" | tee-object -filepath test.log
write-host Now lets get the content of test.log
get-content test.log
write-host
write-host 'Executing: Invoke-Expression "print.exe /?" > test.log'
Invoke-Expression "print.exe /?" > test.log
write-host Now lets get the content of test.log 
get-content test.log

Run it inside PowerShell v1:

PS C:\temp> .\test.ps1
Running in PowerShell Version 1
Executing: Invoke-Expression "print.exe /?" | tee-object -filepath test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
Now lets get the content of test.log
Get-Content : Cannot find path 'C:\temp\test.log' because it does not exist.
At C:\temp\test.ps1:5 char:12
+ get-content  <<<< test.log
 
Executing: Invoke-Expression "print.exe /?" > test.log
Now lets get the content of test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.

The pipe doesn’t work, but standard redirection does.  I also tried piping it to Out-File; oddly, the output goes straight to the screen and nothing to the file.

Now let’s try it in v2 CTP3:

PS C:\temp> .\test.ps1
Running in PowerShell Version 2
Executing: Invoke-Expression "print.exe /?" | tee-object -filepath test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
Now lets get the content of test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
 
Executing: Invoke-Expression "print.exe /?" > test.log
Now lets get the content of test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
PS C:\temp>

In v2, we get the expected result.

I was about to give up on this, but I found this post that describes basically the same issue.  The suggested fix is to wrap the Invoke-Expression (or iex) in parentheses.

PS C:\temp> (Invoke-Expression "print.exe /?") | tee-object -filepath test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
PS C:\temp> get-content test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
PS C:\temp>

Keep on truckin..


Weird Powershell Standard Error Behavior

OK, I’m baffled.  I am working on the previously mentioned PowerShell Perforce backup script.  I intentionally corrupted one of my database files to make sure I’m correctly catching the error.  This ran me into a brick wall trying to trap some standard error output from Perforce.  At least I think it’s standard error.  At this point I could be convinced some new output stream has mysteriously been created that redirects output to some black hole.

Setting the script aside, let’s take a look at the basics.  Here’s what happens in a standard DOS command window:

F:\p4backup>p4d -r F:\P4ROOT -xv > test.log 2>&1
 
F:\p4backup>type test.log
Validating db.counters
Validating db.logger
Validating db.user
Validating db.group
Validating db.depot
Validating db.domain
Validating db.view
Validating db.review
Perforce server error:
        Database open error on db.have!
        BTree is corrupt!
 
F:\p4backup>

Now let’s try it in PowerShell:

PS F:\p4backup> p4d -r F:\P4ROOT -xv > test.log 2>&1
PS F:\p4backup> type test.log
Validating db.counters
Validating db.logger
Validating db.user
Validating db.group
Validating db.depot
Validating db.domain
Validating db.view
Validating db.review
PS F:\p4backup>

No error!  It mysteriously disappeared.  OK, this led me to believe something funky was up with standard error redirection, so I wrote a little standard error test program:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
 
namespace StandardErrorTester
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is being written to standard output.");
            Console.Error.WriteLine("This is being written to standard error.");
        }
    }
}

Let’s run the test program in a dos command window:

F:\p4backup>StandardErrorTester.exe > error.log 2>&1
 
F:\p4backup>type error.log
This is being written to standard output.
This is being written to standard error.
 
F:\p4backup>

Works as expected.  Now let’s try it in PowerShell:

PS F:\p4backup> .\StandardErrorTester.exe > error.log 2>&1
PS F:\p4backup> type error.log
This is being written to standard output.
StandardErrorTester.exe : This is being written to standard error.
At line:1 char:26
+ .\StandardErrorTester.exe  <<<< > error.log 2>&1
PS F:\p4backup>

The standard error does get wrapped with some strange PowerShell stuff, but all the output is present.  I’m wondering if, in the Perforce case, the errors are getting interpreted and hidden somehow.  Very strange.

Here is a simple workaround.  It’s a function that takes what you want to execute and executes it via a batch script, which makes the context the DOS command shell.  You can still check $LASTEXITCODE and filter the function through pipes or redirection.  It’s ugly because it has to create a stub batch file, but it gets around the problem I’m having.

function RunCombineStdErrStdOut
{
    param([string]$commandLineToRun)
    # Get the path where the current script is so we can create
    # our batch file next to it
    $curScript = $script:myInvocation.MyCommand.Path
    $curPath = (Split-Path $curScript)
    # We're going to create a batch file right next to our ps1 script, wherever it lives.
    $runCombBatchFile = $curPath + "\RunCombineStdErrStdOut.cmd"
    if (test-Path -path $runCombBatchFile)
    {
        # Remove the batch file if it exists before we start 
        remove-Item $runCombBatchFile
    }
    # Create our tiny batch script to combine standard error and standard out
    write-Output '@echo off' | out-file $runCombBatchFile -enc ascii -append
    write-Output 'REM This script is auto-created by a PowerShell script' | out-file $runCombBatchFile -enc ascii -append
    write-Output 'REM Modifications will be lost. Do not edit directly' | out-file $runCombBatchFile -enc ascii -append
    write-Output ('REM Script Created by: ' + $curScript.ToString()) | out-file $runCombBatchFile -enc ascii -append
    # %* is a little trick that refers to all the parameters, not just a single one
    # Remove comment for debugging
    #write-Output 'echo Running %*' | out-file $runCombBatchFile -enc ascii -append
    write-Output '%* 2>&1' | out-file $runCombBatchFile -enc ascii -append
    # This will execute the batch file with the command passed to the function
    Invoke-Expression ($runCombBatchFile + " " + $commandLineToRun)
    # You can still check $LASTEXITCODE once this executes
    # it will reflect whatever command we executed
}
 
 
#  Just a simple example to get you going
RunCombineStdErrStdOut "ipconfig /all"
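For comparison, the "merge standard error into standard output" behavior the batch stub provides is a single flag in Python's subprocess module. A small sketch (the child process here is just a stand-in for p4d):

```python
import subprocess
import sys

# Run a child that writes to both streams, redirecting its stderr
# into its stdout -- the same effect cmd.exe's 2>&1 gives.
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print('to stdout'); print('to stderr', file=sys.stderr)"],
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
print(result.stdout)  # both lines arrive in one stream
```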

Robocopy hates it when you end with a slash!

Robocopy is a pretty good utility for copying/mirroring directories in Windows.  I use it a lot.  I just spent a bit of time thinking PowerShell hated robocopy, because I was having a lot of trouble calling robocopy against a directory that had a space in it.  Check out this command line.  Looks pretty innocent to me, but as you can see, robocopy blows its top!

PS H:\temp> robocopy 'h:\temp\test1' 'H:\temp\test space\' /L
 
-------------------------------------------------------------------------------
ROBOCOPY     ::     Robust File Copy for Windows     ::     Version XP010
-------------------------------------------------------------------------------
 
 Started : Wed Jan 28 09:43:27 2009
 
Source : h:\temp\test1\
Dest : H:\temp\test space" \L\
 
Files : *.*
 
Options : *.* /COPY:DAT /R:1000000 /W:30
 
------------------------------------------------------------------------------
 
2009/01/28 09:43:27 ERROR 123 (0x0000007B) Accessing Destination Directory H:\temp\test space" \L\
The filename, directory name, or volume label syntax is incorrect.
PS H:\temp>

Turns out it was the trailing backslash on the path that was making it freak out: when the argument gets quoted for robocopy, the backslash escapes the closing quote, so robocopy sees the quote as part of the path (note the stray quote in the Dest line above).  Don’t end with a slash, folks.  I’m not even sure why I did, because I normally don’t!
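This quote-eating is standard Windows command-line parsing: a backslash immediately before a double quote escapes the quote. Python's subprocess.list2cmdline follows the same convention, and shows why a literal trailing backslash inside a quoted argument has to be doubled:

```python
import subprocess

# An argument containing a space needs quoting. Because \" would be
# read as an escaped quote, the trailing backslash gets doubled
# before the closing quote -- exactly the doubling that was missing
# in the robocopy call above.
arg = "H:\\temp\\test space\\"
print(subprocess.list2cmdline([arg]))
# -> "H:\temp\test space\\"
```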


PowerShell V1 doesn’t like to exit assignment functions

I was working on a fancy backup script for Perforce written in PowerShell, and I’ve definitely come across a bug in PowerShell V1.  Take a look at this:

function Usage()
{
	write-host "Hey fool, you need to pass 1 parameter to this utility!"
	exit 1
}
 
if ($args.Count -ne 1)
{
	Usage
}
write-host "Doing stuff..."

So that little snippet, if run without a parameter, will tell you you’re a fool and never tell you it’s doing stuff. I use exit to abort the entire script.  Works as expected.  Now take a look at this:

function GetMyWidget()
{
	# Pretend I couldn't find my widget
	if ($foundWidget -ne $true)
	{
		write-host "Ooops, something went wrong and I want to exit this script!"
		exit 1
	}
	return $widget
}
$w = GetMyWidget
write-Host "Whatever you do, don't print this!"

I expect the above snippet to print out Ooops… and that’s it. Unfortunately this is what it prints:

PS H:\temp> H:\temp\test.ps1
Ooops, something went wrong and I want to exit this script!
The ‘=’ operator failed: System error..
At H:\temp\test.ps1:11 char:5
+ $w =  <<<< GetMyWidget

Whatever you do, don’t print this!
PS H:\temp>

Execute the same code in PowerShell V2 CTP3: no error, and it NEVER prints “Whatever you do, don’t print this!”.  The difference between the snippets is that in the second I’m assigning a variable from the return value of a function that contains an exit.  Note that in both environments $errorActionPreference is Continue.

That is pretty annoying.  The workaround is to put $script:errorActionPreference = “Stop” before your exit statement, or to make sure that is the preference somewhere.
