
Archive for January, 2009

PowerShell: Invoke-Expression and Tee-Object

January 29th, 2009

So in my previous post I wrote about how I got around a problem with some standard error from Perforce. I want to take my workaround function and pipe it into Tee-Object so I can simultaneously display and log what’s going on with the execution of my consistency check. A consistency check can take a while, so being able to see its running output is important.

In my function, I use Invoke-Expression. Well, it turns out it’s not so easy to just pipe that into Tee-Object. At least in PowerShell v1.

Here is my test PowerShell script:

write-host Running in PowerShell Version $host.version.major
write-host 'Executing: Invoke-Expression "print.exe /?" | tee-object -filepath test.log'
Invoke-Expression "print.exe /?" | tee-object -filepath test.log
write-host Now lets get the content of test.log
get-content test.log
write-host
write-host 'Executing: Invoke-Expression "print.exe /?" > test.log'
Invoke-Expression "print.exe /?" > test.log
write-host Now lets get the content of test.log 
get-content test.log

Run it inside PowerShell v1:

PS C:\temp> .\test.ps1
Running in PowerShell Version 1
Executing: Invoke-Expression "print.exe /?" | tee-object -filepath test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
Now lets get the content of test.log
Get-Content : Cannot find path 'C:\temp\test.log' because it does not exist.
At C:\temp\test.ps1:5 char:12
+ get-content  <<<< test.log
 
Executing: Invoke-Expression "print.exe /?" > test.log
Now lets get the content of test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.

The pipe doesn’t work, but standard redirection does. I also tried piping it to Out-File. Oddly, the output goes straight to the screen and nothing goes to the file.

Now let’s try it in v2 CTP3:

PS C:\temp> .\test.ps1
Running in PowerShell Version 2
Executing: Invoke-Expression "print.exe /?" | tee-object -filepath test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
Now lets get the content of test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
 
Executing: Invoke-Expression "print.exe /?" > test.log
Now lets get the content of test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
PS C:\temp>

In v2, we get the expected result.

I was about to give up on this, but I found this post that describes basically the same issue. The suggested fix is to wrap the Invoke-Expression (or iex) in parentheses.

PS C:\temp> (Invoke-Expression "print.exe /?") | tee-object -filepath test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
PS C:\temp> get-content test.log
Prints a text file.
 
PRINT [/D:device] [[drive:][path]filename[...]]
 
   /D:device   Specifies a print device.
 
PS C:\temp>

Keep on truckin..


Weird PowerShell Standard Error Behavior

January 28th, 2009

OK, I’m baffled. I am working on the previously mentioned PowerShell Perforce backup script. I intentionally corrupted one of my database files to make sure I’m correctly catching the error. This ran me into a brick wall trying to trap some standard error output from Perforce. At least I think it’s standard error. At this point I could be convinced some new output stream has mysteriously been created that redirects output to some black hole.

Setting the script aside, let’s take a look at the basics. Here’s what happens in a standard DOS command window:

F:\p4backup>p4d -r F:\P4ROOT -xv > test.log 2>&1
 
F:\p4backup>type test.log
Validating db.counters
Validating db.logger
Validating db.user
Validating db.group
Validating db.depot
Validating db.domain
Validating db.view
Validating db.review
Perforce server error:
        Database open error on db.have!
        BTree is corrupt!
 
F:\p4backup>

Now let’s try it in PowerShell:

PS F:\p4backup> p4d -r F:\P4ROOT -xv > test.log 2>&1
PS F:\p4backup> type test.log
Validating db.counters
Validating db.logger
Validating db.user
Validating db.group
Validating db.depot
Validating db.domain
Validating db.view
Validating db.review
PS F:\p4backup>

No error! It mysteriously disappeared. OK, this led me to believe something funky was up with standard error redirection, so I wrote a little standard error test program:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
 
namespace StandardErrorTester
{
    class Program
    {
        static void Main(string[] args)
        {
            Console.WriteLine("This is being written to standard output.");
            Console.Error.WriteLine("This is being written to standard error.");
        }
    }
}

Let’s run the test program in a DOS command window:

F:\p4backup>StandardErrorTester.exe > error.log 2>&1
 
F:\p4backup>type error.log
This is being written to standard output.
This is being written to standard error.
 
F:\p4backup>

Works as expected. Now let’s try it in PowerShell:

PS F:\p4backup> .\StandardErrorTester.exe > error.log 2>&1
PS F:\p4backup> type error.log
This is being written to standard output.
StandardErrorTester.exe : This is being written to standard error.
At line:1 char:26
+ .\StandardErrorTester.exe  <<<< > error.log 2>&1
PS F:\p4backup>

The standard error does get wrapped with some strange PowerShell stuff, but all the output is present. I’m wondering if, in the Perforce case, the errors are getting interpreted and hidden somehow. Very strange.

Here is a simple workaround. It’s a function that takes what you want to execute and executes it via a batch script, which makes the execution context the DOS command shell. You can still check $LASTEXITCODE and filter the function through pipes or redirection. It’s ugly because it has to create a stub batch file, but it gets around the problem I’m having.

function RunCombineStdErrStdOut
{
    param([string]$commandLineToRun)
    # Get the path where the current script is so we can create
    # our batch file next to it
    $curScript = $script:myInvocation.MyCommand.Path
    $curPath = (Split-Path $curScript)
    # We're going to create a batch file right next to our ps1 script, wherever it lives.
    $runCombBatchFile = $curPath + "\RunCombineStdErrStdOut.cmd"
    if (test-Path -path $runCombBatchFile)
    {
        # Remove the batch file if it exists before we start 
        remove-Item $runCombBatchFile
    }
    # Create our tiny batch script to combine standard error and standard out
    write-Output '@echo off' | out-file $runCombBatchFile -enc ascii -append
    write-Output 'REM This script is auto-created by a PowerShell script' | out-file $runCombBatchFile -enc ascii -append
    write-Output 'REM Modifications will be lost. Do not edit directly' | out-file $runCombBatchFile -enc ascii -append
    write-Output ('REM Script Created by: ' + $curScript.ToString()) | out-file $runCombBatchFile -enc ascii -append
    # %* is a little trick that refers to all the parameters, not just a single one
    # Remove comment for debugging
    #write-Output 'echo Running %*' | out-file $runCombBatchFile -enc ascii -append
    write-Output '%* 2>&1' | out-file $runCombBatchFile -enc ascii -append
    # This will execute the batch file with the command passed to the function
    Invoke-Expression ($runCombBatchFile + " " + $commandLineToRun)
    # You can still check $LASTEXITCODE once this executes
    # it will reflect whatever command we executed
}
 
 
#  Just a simple example to get you going
RunCombineStdErrStdOut "ipconfig /all"

Robocopy hates it when you end with a slash!

January 28th, 2009

Robocopy is a pretty good utility for copying/mirroring directories in Windows. I use it a lot. I just spent a bit of time thinking PowerShell hated Robocopy because I was having a lot of trouble calling Robocopy against a directory that had a space in it. Check out this command line. Looks pretty innocent to me, but as you can see, Robocopy blows its top!

PS H:\temp> robocopy 'h:\temp\test1' 'H:\temp\test space\' /L
 
-------------------------------------------------------------------------------
ROBOCOPY     ::     Robust File Copy for Windows     ::     Version XP010
-------------------------------------------------------------------------------
 
 Started : Wed Jan 28 09:43:27 2009
 
Source : h:\temp\test1\
Dest : H:\temp\test space" \L\
 
Files : *.*
 
Options : *.* /COPY:DAT /R:1000000 /W:30
 
------------------------------------------------------------------------------
 
2009/01/28 09:43:27 ERROR 123 (0x0000007B) Accessing Destination Directory H:\temp\test space" \L\
The filename, directory name, or volume label syntax is incorrect.
PS H:\temp>

Turns out it was the trailing slash on the path that was making it freak out. Don’t end with a slash, folks. I’m not even sure why I did, because I normally don’t!
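For comparison, the same command with the trailing slash dropped should run clean (a sketch using the same paths as above):

```
PS H:\temp> robocopy 'h:\temp\test1' 'H:\temp\test space' /L
```

The "Dest" line in the failed run hints at why: the backslash immediately before the closing quote appears to act as an escape for the quote when the argument is handed to the native executable, so Robocopy ends up seeing the quote and the /L switch as part of the destination path.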


PowerShell V1 doesn’t like to exit assignment functions

January 27th, 2009

I was working on a fancy backup script for Perforce written in PowerShell and I’ve definitely come across a bug in PowerShell V1.  Take a look at this:

function Usage()
{
	write-host "Hey fool, you need to pass 1 parameter to this utility!"
	exit 1
}
 
if ($args.Count -ne 1)
{
	Usage
}
write-host "Doing stuff..."

So that little snippet, if run without a parameter, will tell you you’re a fool and never tell you it’s doing stuff. I use exit to abort the entire script.  Works as expected.  Now take a look at this:

function GetMyWidget()
{
	# Pretend I couldn't find my widget
	if ($foundWidget -ne $true)
	{
		write-host "Ooops, something went wrong and I want to exit this script!"
		exit 1
	}
	return $widget
}
$w = GetMyWidget
write-Host "Whatever you do, don't print this!"

I expect the above snippet to print out Ooops… and that’s it. Unfortunately this is what it prints:

PS H:\temp> H:\temp\test.ps1
Ooops, something went wrong and I want to exit this script!
The ‘=’ operator failed: System error..
At H:\temp\test.ps1:11 char:5
+ $w =  <<<< GetMyWidget

Whatever you do, don’t print this!
PS H:\temp>

Execute the same code in PowerShell V2 CTP3: no error, and it correctly never prints “Whatever you do, don’t print this!”.  The difference between the snippets is that I’m assigning the return value of my function, which contains an exit, to a variable.  Note that in both environments $errorActionPreference is Continue.

That is pretty annoying.  The workaround is to put $script:errorActionPreference = “Stop” before your exit statement, or to make sure that is the preference somewhere earlier.
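Applied to the widget example above, that workaround looks something like this (a sketch, assuming the same $foundWidget and $widget variables as before):

```
function GetMyWidget()
{
	# Pretend I couldn't find my widget
	if ($foundWidget -ne $true)
	{
		write-host "Ooops, something went wrong and I want to exit this script!"
		# Workaround: make errors terminating so V1 actually stops
		# instead of choking on the assignment and continuing
		$script:errorActionPreference = "Stop"
		exit 1
	}
	return $widget
}
$w = GetMyWidget
write-Host "Whatever you do, don't print this!"
```

With the preference set to Stop before the exit, V1 no longer emits the “‘=’ operator failed” error and the last line never prints, matching the V2 behavior.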


New Laptop and Windows 7

January 20th, 2009

I just took ownership of a refurbished Dell M4400 laptop. I was excited to bust it out, but my heart sank when I cracked the lid and saw the bezel above the keyboard wasn’t seated. I pushed down and it clicked into place, all but one end. One of the plastic tabs that clicks into a receiving slot on the laptop had broken at some point prior to my taking ownership. I’m surprised Dell didn’t notice this when they were ‘refurbishing’ it. I hopped online with Dell tech support chat and actually had a rather pleasant experience resolving the issue. It was only a couple of moments before I had a live person and only a few more before they told me the part would be here tomorrow. No arm twisting to get them to understand I’m a nerd who is more than capable of installing a plastic bezel. Plus they sent me, gratis, the OS install disk, which I guess I forgot to opt for when I ordered the laptop. Sweet.

When I picked out the M4400, I specifically opted for one with the eye-bleeding resolution of 1920×1080 for its 15.4″ screen. I was a tad concerned I had maybe gone too far this time, but it turns out I didn’t have to enable super xtra large blind fonts or anything. Such high resolutions are good for side-by-side viewing. I think the feature I was most excited about having in a laptop is the backlit keyboard. Even though I’m not a touch typist, I’ve frequently found in the dark that I’ve wanted backlighting on my keys so I could find a function key.

The laptop came with Vista 32-bit, which certainly wouldn’t do. A while back I installed Vista on a machine and reverted to XP after a few short days. For this machine, I opted to attempt to dual boot Windows Server 2008 64-bit and Windows 7 Beta 64-bit. I figure 2008 will give me a stripped-down, performance-oriented laptop without the flashy stuff I always turn off. I also wanted to install Windows 7 Beta and see what all the fuss was about. Seems like every tech blog I read is just going ga-ga about it.

That was the plan. In actuality, getting Windows Server 2008 to work on the M4400 may not fly in the end. My first stumbling block is a big one: network connectivity. I can’t seem to get the ethernet connection to work, nor the Intel 5300 WiFi. Searching around, it looks like I’m not the only one. I found some instructions that claim people have gotten it working, but I was unable to replicate their success.

No connectivity in 2008 makes the laptop an expensive paperweight these days. I decided to switch gears and install Windows 7. With both Windows 7 and Server 2008, installation has finally become relatively painless. They’ve taken a page from Mac OS X and only require a few clicks from the user to get the OS installed. What was really fantastic about Windows 7 is that so far all my devices seem to work. Wireless, ethernet, etc. That is a nice treat after wrestling with 2008.

I plan to try to use Windows 7 as my primary OS on the machine, especially since currently it’s the only OS with connectivity. The first thing that struck me is a new window management feature. The behavior reminds me of a freeware app I’ve used in XP called WinSplit Revolution. In Windows 7 you can drag one window to the left side of the screen and a ghosted window appears to show you how it’s going to snap the window to the left half of the screen if you let go. Same for the right. So it’s easy to get two windows maximized side by side. The hot keys winkey+left arrow and winkey+right arrow make this a snap!

The other feature I’m on the fence about is the new taskbar. The first thing I do when I get a new XP machine is go into System->Advanced->Performance->Settings and change to adjust for best performance. This turns all the eye candy off and makes the taskbar nice and simple. I also right-click on my taskbar, select Properties, and uncheck group similar icons. I am addicted to having a lot of windows open, and I find grouped windows make finding a particular window more difficult. This may have changed for me in Win7. When I mouse over a set of grouped windows in the taskbar now, I get a nice live preview thumbnail of what’s going on in each of the grouped windows. I can easily click on the large thumbnail of the window I’d like to switch to. I frequently have a number of file explorer windows open, and this seems to make it easy to find the one I want instead of lazily opening yet another explorer window.

Although I haven’t had time to explore this feature beyond a cursory check: holy firewall, Batman! FINALLY a non-neutered firewall that feels first class. When you have a background of using a *nix firewall, like ipfw in FreeBSD in my case, you get quickly frustrated when you find Windows barely has a thimbleful of those features. Not to mention there isn’t a free firewall on the market I’ve found that will allow you to create a deny-all rule and then explicitly allow only certain traffic. Yes, I’ve used the ipfw port for Windows, but really, why doesn’t something like ipfw exist on Windows with a warm fuzzy GUI at a price of $0? Well, now it seems like it does! Good job, Microsoft.

On the other hand, UAC is still there. And although reportedly not as annoying, it still gets in the way.

Guess we’ll see. I haven’t got a lot of experience with Vista, but from listening to the grapevine, Windows 7 might be the real XP upgrade path. Vista may be remembered like Windows ME, a bad memory.


New FinalBuilder Custom Action: Check Required Variables

January 18th, 2009

I’ve been using FinalBuilder for a while now. It’s an icon-based scripting language for creating build scripts. It reminds me of a multimedia language I used a long time ago called IconAuthor. Initially I had resisted the tool because I thought it might be too easy. But really, when you think about creating build scripts, a good portion of the work is creating thin wrappers around existing utilities. FinalBuilder does a good job of this and is flexible enough to create build script engines that have their own configuration and work with whatever process you come up with.

To create a ‘subroutine’ or function in FinalBuilder, you have a few options. I like creating new stand-alone FinalBuilder project files that are included by my main project. For instance, I’ve created a project that figures out if there have been any changes in source control since the last build and creates a changelog. This is functionality I can use in any build script. Rather than cutting and pasting it in, I can just reference the project ‘subroutine’ I created for it.

One limitation of this method is that there is no such thing as a function signature. If your subroutine project requires the parent project to define some variables, you need to add your own logic if you want to make sure each variable has been defined. If you don’t do this, you can get unexpected results and you’ll need to debug what went wrong. “Oh, I forgot to define the BuildDirectory variable in my parent script!”

What I decided to do is create some pre-canned functionality to check that the specified required variables exist and have a non-empty value. I actually initially wrote this as a separate FinalBuilder project. I thought about it for a while and decided this might be better as its own custom action. FinalBuilder supports developer-created actions and even provides a nice IDE to help author them. You have the option of using VBScript, JavaScript, PowerShell, or .NET. I went with JavaScript since this action didn’t need to be too complicated.

Along with the custom action, I built a simple example and an attempt at unit testing the action. There is no built-in way to do unit testing in FinalBuilder, but I took a stab at creating a simple structure to support it. I have 8 tests, which are included in a single FinalBuilder project file.

I’ve hosted the project and source at https://launchpad.net/wolf-fbcustomactions. The first release can be downloaded from http://www.wolfplusplus.com/projects/WolfFBCustomActions-0.1.7z.


VS2003 Web App Project and Subversion

January 16th, 2009

Wow, this problem took way too long to track down. I was creating a build script for a legacy VS2003 Web App project. It is being migrated from Perforce and given an official build script. The command line build was failing. When I opened the solution in the VS2003 IDE, I was plagued with the following dialog:

—————————
Microsoft Development Environment
—————————
Refreshing the project failed. Unable to retrieve folder information from the server.
—————————
OK
—————————

I tried a number of different things. One thing I suspected was that maybe IIS was having a problem with the NTFS permissions, since my virtual directory was pointing to a different drive. It led me to a cool post on Experts Exchange which gave instructions on how to use the cool open source utility SetACL to copy permissions from one folder to another. Using robocopy /copyall doesn’t copy inherited permissions, but SetACL can. It wasn’t a permissions problem, but I’m sure I’ll need SetACL at some point.

Googling the error message yields advice to delete your C:\Documents and Settings\(username)\VSWebCache folder. This isn’t bad advice, but it didn’t solve my particular problem. It turns out VS2003 Web projects don’t like folders that begin with a dot. Subversion by default litters every source-controlled directory with hidden .svn folders. Subversion has a FAQ topic about this problem. To make Subversion use _svn folders instead of .svn folders, set an environment variable named SVN_ASP_DOT_NET_HACK to any value. You may also want to enable this option in TortoiseSVN.
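For example, from a cmd prompt you could persist the variable with setx (a sketch; any non-empty value works, and already-running programs won’t see it until they’re restarted):

```
REM Tell Subversion clients to use _svn instead of .svn
setx SVN_ASP_DOT_NET_HACK 1
```

If setx isn’t available on your version of Windows, the variable can also be added through System Properties -> Advanced -> Environment Variables. Note that existing working copies already checked out with .svn folders will need to be checked out fresh after the change.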

Very annoying problem solved.


TuneWiz Loosely Coupled, Easily Testable

January 13th, 2009

A few years ago someone introduced me to the idea of unit testing with NUnit.  Brilliant!  Whenever I develop applications, I inevitably write a lot of throwaway code.  Lots of code to verify assumptions.  I would simply delete or comment out this code.  With unit testing, I keep all this code and re-run it to verify any changes I’ve made haven’t invalidated my assumptions.  A while later, another light bulb lit up for me.  These ‘unit tests’ I’d been writing were not unit tests at all!  They were integration tests.  Sure, I would test the class in question, but I would create any supporting objects along the way.  At first this double testing of objects seemed like over-engineering, but at some point I realized I was creating a lot of redundancy.  Introduce a database into the equation and now I’m doing more testing than necessary, and my tests are running slowly because I’m making round trips to a database.  If I want to test business logic, it makes sense that I would want to separate this from testing whether or not I can read and write to a database.

I’m still learning proper unit testing.  While not specific to unit testing, there is, in the community at large, the helpful concept of dependency injection, or inversion of control.  One aspect I’ve latched onto is that instead of creating ‘new’ objects in a class, the class should take these objects as parameters passed in to your constructor or methods.  Your methods or constructors expect an interface, and you leave it up to the caller which implementation you want to push in.  Even if you only have one concrete implementation of your interface, this leaves the door open to create a fake implementation for your unit tests.  Or you can use a mocking framework like Moq or Rhino Mocks to help you do this.  So what this helps you do is create code that tests a single unit of functionality in isolation.

So now that we have the background out of the way, let’s get to how this applies to TuneWiz.

I have an object called TuneTrack that contains your basic track information.  I created a TuneTrack collection to hold a bunch of these tracks.  I decided to add a GetOrphanTracks method to this collection so I could get all the tracks that iTunes lists in its library but that don’t exist on disk.  I can tell a track is an orphan if its Location property is null or if the file doesn’t exist on disk.  Simple enough. Here’s my first unit test:

[Test]
public void Return_orphan_track_with_location_null()
{
	// Mock TuneTrack created with Moq library
	var mock = new Mock<TuneTrack>(null);
	// Compiler doesn't like it when we simply pass in bare null for Returns
	string nullStr = null;
	// When something accesses our mock implementation's Location
	// property, we make it return null
	mock.Setup(x => x.Location).Returns(nullStr);
 
	// Create the TuneCache object we're unit testing
	var tc = new TuneCache();
	// Add the mock TuneTrack to the collection
	tc.Add(mock.Object);
 
	// Make sure we have a 'real' tunetrack
	Assert.IsNotNull(tc.GetOrphanTracks().First());
	// Location of track should be null as reported by our mock object
	Assert.IsNull(tc.GetOrphanTracks().First().Location);
}

If a TuneTrack’s Location property is null, that means it definitely doesn’t exist on disk. Excellent.

Here’s the method in the TuneCache object that makes that test pass:

public IEnumerable<TuneTrack> GetOrphanTracks()
{
	// LINQ statement to return orphan tracks
	// In my experience, iTunes seems to return a null when the file doesn't exist on disk
	var orphanTracks = from t in this
				 where t.Location == null
				 select t;
 
	return orphanTracks;
}

For the next test, we want to create a mock TuneTrack with a real location that does not exist on disk. Here’s where it gets tricky. The old me, who didn’t know the difference between an integration test and a unit test, would create a TestData directory, fill it with some files, and write some tests against those files. Some calls to our static friend File.Exists and whiz bang, we’re done! I remind myself I just want to test the TuneCache object. I don’t want to test the File.Exists method.

This is where that IoC stuff comes in. I did some searching to see what other people do about File.Exists-type work with regard to testing. The answer: create an interface, and in the concrete implementation create a non-static method that calls File.Exists. That seemed to gel with the advice that static methods and singletons might be harmful for testing. So I need to refactor my GetOrphanTracks method to take my new IFileTools interface. The big picture here is that I can swap in a FAKE implementation that can be used to simulate files existing or not existing. Here is one of my unit tests for this:

[Test]
public void Return_one_orphan_track_where_location_does_not_exist()
{
	// Mock TuneTrack created with Moq library
	var mockTune = new Mock<TuneTrack>(null);
	// This file probably exists on most systems
	// it doesn't matter though because it's fake
	// no files are harmed or accessed in this method
	var notepad = "C:\\windows\\notepad.exe";
	// when the location of our mockTune is accessed
	// the location will be notepad
	mockTune.Setup(x => x.Location).Returns(notepad);
 
	// Create our real TuneCache
	var tc = new TuneCache();
	// Add our fake tunetrack
	tc.Add(mockTune.Object);
 
	// Create a fake IFileTools that will simulate
	// a file not existing.  This isolates the test
	// and removes the dependency on the file system and File.Exists()
	var fakeFileTools = new Mock<IFileTools>();
	// any call to exists return false
	fakeFileTools.Setup(x => x.Exists(It.IsAny<string>())).Returns(false);
 
	// Get list of orphan tracks using our mock FileTools that will tell
	// us no file exists
	var orphanTracks = tc.GetOrphanTracks(fakeFileTools.Object); 
	Assert.IsTrue(orphanTracks.Count() == 1, "Cache did not contain 1 orphan track");
	Assert.AreEqual(notepad, orphanTracks.First().Location, "TuneTrack did not have expected location on disk");
}

Now let’s look at the new method to make this test pass:

public IEnumerable<TuneTrack> GetOrphanTracks(IFileTools fileTools)
{
	// LINQ statement to return orphan tracks
	// In my experience, iTunes seems to return a null when the file doesn't exist on disk
	// I believe iTunes may perform its own Exists check, but we'll be extra safe and do our own
	var orphanTracks = from t in this
				 where t.Location == null || fileTools.Exists(t.Location) == false
				 select t;
 
	return orphanTracks;
}

The one thing that feels awkward to me is that now, in my real code, I have to pass in a new FileTools() object when I call my get orphans method. I know there are tools to help with this, but I’m not ready to pick an IoC library. I don’t feel like things are complex enough to warrant it. I just read the article Dependency Injection in the Real World. It gave me an idea:

public IEnumerable<TuneTrack> GetOrphanTracks()
{
	return GetOrphanTracks(new FileTools());
}

Bang! Now in the ‘real’ world, I don’t have to think about it. I only worry about injecting an IFileTools in test isolation land. If I find I need multiple concrete implementations of IFileTools, I will probably switch my approach. But for now my unit tests pass and TuneCache doesn’t feel too encumbered. On the other hand, wasn’t I trying to get away from new? Perhaps this is a hybrid approach for easing slowly into the IoC world.

I have checked the code in for this. Go take a look at the files for the above implementation.


Bazaar TuneWiz Launchpad

January 11th, 2009

Professionally, I have been an administrator for CVSNT, Perforce, Vault, and Subversion.  In my experience, the one area where all these systems seem to fall flat is renaming combined with merging/branching.  All of those systems start to look long in the tooth when you see the new wave of source control systems.  What appeals to me about distributed source systems is not the P2P-type repository; it’s that branching and merging are first-class citizens (see Renaming is the killer app of distributed version control).  The three big players I’ve been watching are Git, Hg (Mercurial), and Bazaar.  All of these systems seem to allow much better branching and merging.

Bazaar’s sell sheet has persuaded me to give it a spin.  In the future I would definitely like to give both Git and Mercurial some real-world projects to manage.  Although I can compare and contrast feature sheets, sometimes nothing beats first-hand experience.

Since I chose Bazaar as my source control system, I’ve decided to host the TuneWiz development on Launchpad.  Launchpad uses Bazaar as its version control system.  Launchpad has tools for managing specifications, bugs, Q&A, etc.  One of the most interesting things Launchpad offers is the ability for anyone to easily contribute to your project.  From here:

With Launchpad and Bazaar, contributors can create their own branch of your code, make their changes and then push it all back up to Launchpad to be listed right alongside your official branches. And because they never touch your trunk they don’t need to ask for commit access.

Launchpad’s code review and Bazaar’s superb support for merging make bringing the new code and its revision history back into your branch quick and easy.

Here is TuneWiz hosted on Launchpad.


New Application TuneWiz

January 11th, 2009

I’ve started a new project called TuneWiz.

Problem: I have a very specific way I like to process and manage my music files.  I’m warming up to the iTunes application, but it doesn’t always work very well with the way I like to manage my music.  My plan is to write an application to address the things that bother me.

Problems:

  1. iTunes does not monitor folders.  If I add a new folder of music, I have to manually add it.
  2. If I move or delete files on disk, iTunes doesn’t remove them from the library.
  3. iTunes stores rating information in its database.  I’ve been using Windows Media Player for a while and it stores rating information in the mp3 files.  I prefer this method because I can easily move and share music between computers and the ratings go with them.

Solution:

An application that integrates with iTunes.  It adds new files, removes orphans, and syncs ratings between iTunes and id3v2 tags.  I have already written code to get comfortable with the iTunes COM interface as well as to experiment with some implementation ideas.

Implementation Ideas:

  • One screen for finding orphan files
  • One screen for adding new files.  User adds folders to scan for files
  • One screen to sync rating information.
    • Sync behavior will be to take tracks rated in iTunes and save those ratings to files that have no rating.
    • Sync behavior will also take files that have ratings and push those ratings to tracks without ratings in iTunes
    • If ratings exist in iTunes and on disk, user will be prompted which rating they want to keep
    • Future enhancement of sync behavior may include implementing a database to store historical information.  This would make rating conflicts easier to solve.

Other applications of note:

Some other applications already exist to do the things I want.  I am doing this as an exercise, but it’s worth noting other applications that exist.

iTunes Library Updated – This application solves the first two problems, but it looks like the original developer isn’t sure he’s going to continue development.  It is a GPL project so source is available.

iTunes Folder Watch – This app also solves my first two problems.  It even has the ability to monitor folders.  No source is available.  Some features require a paid license.  The free version has a nag screen.

MusicBridge – This app solves the third problem.  It goes above and beyond and can sync more than just ratings.  No source is available, but app is free.
