Thursday, 17 December 2009

iPhone: Forcing a UIView to reorientate

In my current iPhone application I need to force a UIView to reorientate when a certain event happens. The SDK doesn't really allow for this to happen.

There's the well-known UIViewController method
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation that is called whenever the user rotates the iPhone. If you say YES to that orientation then the OS does the spade work of rotating the display and your views for you. However, you cannot manually ask to rotate the screen, so if you decide that your shouldAutorotateToInterfaceOrientation: answer has changed, there's no way to tell the OS.

The blind alley

There are a number of blog posts, discussions, and Stack Overflow posts that suggest you use the private setOrientation: method of the UIDevice class. This is all well and good apart from two issues:
  1. It's a private API. You won't get into the App store if you use it.
  2. More importantly, it doesn't work.
This seems the canonical version of that method:
[[UIApplication sharedApplication] setStatusBarOrientation:UIInterfaceOrientationLandscapeLeft];
[[UIDevice currentDevice] setOrientation:UIInterfaceOrientationLandscapeLeft];
It serves to rotate the status bar, but leaves your UIView where it is.

How to make it work

Until Apple bless us with an official API for doing this, here's how I managed to achieve my goal...

Once you have decided that you want to orientate a different way, and have arranged to return a different shouldAutorotateToInterfaceOrientation: answer, you must change your window's subview hierarchy. Doing this forces the OS to ask you about your rotation support and wiggle everything around appropriately.

Since my app has a parent UINavigationController at the top, which puts a single subview into the UIWindow, this is what I do:
UIWindow *window = [[UIApplication sharedApplication] keyWindow];
UIView *view = [window.subviews objectAtIndex:0];
[view removeFromSuperview];
[window addSubview:view];
It's a bit unpleasant, but it works. I tried variants such as removing the window and then making it the keyWindow again, however that didn't trick the OS into asking for rotation state again.

I hope this helps you. As ever, if you know a better way of doing this, please let me know!

Tuesday, 15 December 2009

The git equivalent of svnversion

I've been happily using git as my version control weapon of choice for some time now. It's integrated into my automatic build, test, and release scripts neatly. The world is a nice place.

Except, I've not really found a compelling replacement for Subversion's svnversion. I used to use svnversion in my release scripts to name the final build disk image nicely, something like "fooproject-SVNVERSION.dmg", as well as versioning some of the files inside the image.
What is svnversion?
svnversion is a cute little command-line tool that inspects the current working directory to determine which revision of a Subversion repository you have checked out. If your head revision is 215, and you have all files checked out at HEAD, then running svnversion you'll get:
pete@solomon > svnversion
215
If you have some older files, then you'd get a mixed-revision range, something like:
pete@solomon > svnversion
200:215
And, if your working copy contains some files with uncommitted local changes, then you'd get a helpful M added to the end:
pete@solomon > svnversion
215M
It is possible to get something similar to svnversion using git's git describe command. In some ways it is superior to Subversion's simple monotonically incrementing numbers, in some ways inferior. By necessity, it works differently from svnversion.

git describe looks at the git revision you have checked out, and produces the name of the nearest tag, followed by the number of commits since that tag, and a partial SHA-1 value of the current commit. This is relatively cute:
pete@solomon > git describe
v1.2-14-g2414721
However, there are issues:
  • Firstly, it doesn't mark whether there are local uncommitted modifications in your local tree.
  • And the partial SHA-1 at the end may not always remain valid. If your project grows large enough that more characters are required to unambiguously identify a commit, older "git describe" outputs will no longer uniquely identify a revision.
We can't fix the latter issue, but we can tweak a little to add some "M" love to our git description.

Try this little bit of bash scriptery:
GITVERSION=`git describe`
GITMODIFIED=`(git status | grep "modified:\|added:\|deleted:" -q) && echo "-M"`
echo "${GITVERSION}${GITMODIFIED}"

That's a little more useful.
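Wrapped into a function, the pair above can name a release disk image directly. The "fooproject" name and the .dmg suffix are illustrative stand-ins, and the --always flag is my addition so git describe still produces something before you've tagged anything:

```shell
# Name the release artifact after the git description, appending -M when
# the tree has uncommitted modifications. "fooproject" and the .dmg
# suffix are stand-ins for your own release naming scheme.
git_build_name()
{
    GITVERSION=$(git describe --always)
    GITMODIFIED=$( (git status | grep -q "modified:\|added:\|deleted:") && echo "-M" || true )
    echo "fooproject-${GITVERSION}${GITMODIFIED}.dmg"
}
```

In a clean tree whose HEAD carries an annotated tag v1.0, this prints fooproject-v1.0.dmg; modify a tracked file and the -M appears.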

If you know a better way of doing this, I'd love to know!

Friday, 11 December 2009

Speaking: ACCU 2010

The ACCU 2010 conference schedule has been ceremoniously unveiled on the ACCU website, here. It looks as strong as ever.

This year I will be presenting two whole sessions for your delight and delectation.
  • The first, poetically entitled Stood at the Bottom of the Mountain Looking Up, is an investigation into how to quickly get up to speed with new technologies, languages, problems, etc. It'll be an interesting and practical "soft skills" kinda thing. I'm not sure how to weave inappropriate imagery into the talk yet, but I have plenty of time to work on it.
  • The second, snappily entitled iPhone development is a talk about, well, iPhone development. It's a co-presented session with the esteemed Mr Phil Nash. Goodness knows how we'll cover the ground in a mere 90 minutes! I'm really looking forward to this one.
I should probably do something about the biography that's currently up there. "Pete owns some shoes. But won't wear them." is accurate but perhaps not too descriptive.

Thursday, 10 December 2009

Boost on the iPhone

This is the simple way to get Boost into your iPhone code.

I've been porting a large C++ project to the iPhone. It uses the excellent Boost libraries. Building Boost for the iPhone is not impossible, just a bit of a pain in the arse.

There are a number of good examples of how to do this online, for example the Backstage blog entry here and Matt Galloway's blog here. They are useful hints that help you work past the impenetrable Boost Build documentation.

However, the story does not end here. Those instructions allow you to build a set of libraries for the simulator, or for the iPhone OS. But not both. This means that your Xcode project setup gets fiddly with different link paths for the different targets.

You can solve this by creating a "universal" fat library. The lipo tool can be used to shunt the individual libraries together. Not tricky, just another step.
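As a sketch, a tiny helper can do the shunting. The Release-* paths are assumptions about your Xcode build output layout, and this naturally only runs on a Mac with the developer tools installed:

```shell
# Glue the device and simulator builds of one static library together
# into a single "universal" file using lipo (part of the macOS developer
# tools). The build/Release-* paths are assumptions about your Xcode
# output layout; adjust to taste.
make_fat_lib()
{
    lipo -create \
        "build/Release-iphoneos/$1" \
        "build/Release-iphonesimulator/$1" \
        -output "build/$1"
}
```

Run, say, make_fat_lib libboost_thread.a after both xcodebuild passes; lipo -info on the result should then list both the arm and i386 slices, and a single link path serves both targets.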

Now, for bonus points it would be sweet to construct a "Framework" for the Boost libraries, allowing you to use them in Xcode like any other iPhone framework. I've already blogged on how to do this here.

Of course, if you were sensible, you'd wrap this up in a script so that anyone can use it. A script a bit like this one.

I've set up a Gitorious project for this script. Feel free to use it.

Friday, 13 November 2009

Writing: Respect the Software Release Process

The November issue of ACCU's CVu magazine is out now. It contains my latest Professionalism in Programming column, "Respect the Software Release Process".

I felt a little Wordly when I designed the cover for this issue. Thanks to the postal strike in the UK, I got the printer's cover proof for the magazine after the full printed magazine landed on my doormat!

Monday, 9 November 2009

Book Review: iPhone Games Projects

Name: iPhone Games Projects
Author: Dave Mark, PJ Cabrera
Publisher: Apress
Price: $39.99
Pages: 258
Reviewed by: Pete Goodliffe
Verdict: OK
This is the second book I've reviewed in this Apress iPhone series (the first being "iPhone Cool Projects"). The book has many of the characteristics of the first: it is full-colour throughout, with clear writing, beautiful presentation, and relatively good copy editing. It hangs together about as well as the other book, too, which is "mostly".

It is a series of 8 distinct essays by different "expert" (a relative term on such a new platform) iPhone game developers. The tone and approach of each chapter is therefore different.

The collection of topics covered is OK, but doesn't spread over the entire broad spectrum of game topics: there are TWO essays on networking, TWO essays on optimisation, one on multi-platform development (interesting in an "iPhone" book), one on writing a design document, and a walkthrough of a simple board game.

There are many more topics that might have been interesting chapters to have in this type of book: a 3D graphics primer, when/how to select a third party games engine, considerations for getting your game noticed in the app store, and more.

There are some recurring themes: a few authors suggest preferring C over Objective-C (for obvious reasons). There is some discussion of why C is "better" than C++ which is (to a C++ programmer) unbalanced and misleading.

As ever, the source code to each project is available from the Apress website. The quality of some of the code is quite variable.

If you want to write an iPhone game this book might be an interesting read, but I wouldn't suggest that every iPhone game programmer HAS to buy it. Some sections of it have far more value than others. In fact, I think overall you'd get more mileage from the "iPhone Cool Projects" book since it covers a broader range of topics. I'm left feeling that the two books rolled into one would probably have been a better product. And I'm still not convinced that the title is even grammatically correct.

Thursday, 5 November 2009

Book Review: Pro Git

Name: Pro Git
Author: Scott Chacon
Publisher: Apress
Price: $34.99
Pages: 265
Reviewed by: Pete Goodliffe
Verdict: Highly Recommended
It's not often I start a book review with glowing praise. This time, I will: if you use the git version control system, or are thinking of using git in the future, get this book. It's excellent.

Pro Git is available to read online (or you can git clone the book's source). This means that you can read it for free before considering a purchase. Indeed, that's where I started. However, I highly recommend the dead tree version. Apress' production quality is excellent and the paper copy is definitely a valuable thing to have.

The book is an excellent introduction to using git; it's perfect for newbies, and a good reference for existing users. It starts from first principles. That is, it describes what git is, and what a distributed version control system is. It briefly introduces version control in general, but that is really prerequisite information.

The text is well paced, and very clearly written. The examples are well chosen and the coverage of git's facilities is broad.

The author starts with installing/configuring git and outlines the basic git principles. He covers basic operations (check in, clone, viewing logs, tagging). Then he moves onto git's crowning glory: branching and merging. This potentially tricky topic is covered very well.

The book also covers running a git server, sensible workflows to tame distributed collaboration, useful/advanced git facilities (stashing, amending history, binary searches, subtree merging, client- and server-side hooks), and using git with other version control systems. In particular, there is good coverage of using git as a more advanced subversion client.

The final chapter is particularly useful: a great overview of git internals. This sounds relatively pointless when you've covered most git usage already. However, this is a great chapter - the author explains what's going on under the covers in such a way that you gain a much better insight into how all the high-level git operations work.

Wednesday, 21 October 2009

Book Review: iPhone Cool Projects

Name: iPhone Cool Projects
Author: Gary Bennet et al
Publisher: Apress
Price: $39.99
Pages: 209
Reviewed by: Pete Goodliffe
Verdict: Recommended
iPhone programming is one of the current "hot topics" and we're seeing an increasing number of books published on this topic. This one is a bit of a mixed bag.

This is not an introductory tome; it requires significant prior understanding of the iPhone toolset and development environment. Instead, the book presents a number of complete fully-working iPhone applications covering various core iPhone technologies. It fits into a series of other Apress iPhone titles. Not having seen the other books, I can't say how well it complements the other titles in the series.

The book is effectively a collection of essays by many authors, one per chapter, all "experts" at various aspects of iPhone development. Some of them have produced very successful iPhone applications.

The topics covered are: simple game programming, peer-to-peer networking, multi-threaded applications, creating multi-touch interfaces, physics and 2D animation libraries, audio streaming, and creating a location-aware application with a navigation-based UI. There are no topics covered that you can't fathom relatively easily from the free Apple documentation and a bit of careful thought. However, the useful piece of the jigsaw is seeing how other developers have already learnt iPhone OS and solved the common problems.

The production quality of the book is high. It has been very well presented, in full colour throughout, with many iPhone and Xcode screen shots. On the whole, the writing is good. Some chapters appear to have been better proof read than others.

Perhaps the most useful part of the book is the availability of the source code for all the sample applications (from the publisher's website), so you can run and take apart the projects at your leisure. There is no bundled CD, and I'm more than happy with that.

As with many such multi-authored books, some chapters are better than others. Each chapter is relatively short, and they all basically provide an overview of their topic - enough to pique your interest, but not enough to answer any serious questions. For some topics this works better than others.

Highlights are the first game-writing chapter, the multi-touch interface chapters, and the location-based application chapter. These present useful information about how to write a "real" iPhone application. I felt let down by the threading chapter which presents a fairly glib and un-thorough overview of the perils of writing threaded apps. The networking chapter is a very simplistic introduction; nothing it says is wrong, but to write your own serious networked application you'd really need to know a lot more about network technologies.

If you're an experienced programmer who wants a casual introduction to some more meaty iPhone projects than you've seen in the introductory texts, this book may be interesting for you. It's easy to read, fast paced, and pretty.

Friday, 25 September 2009

Code: iPhone linker error (__restore_vfp_d8_d15_regs)

In my iPhone development I encountered link errors along the lines of:
"___restore_vfp_d8_d15_regs", referenced from:
-[Blah blah:] in blah.o
"___save_vfp_d8_d15_regs", referenced from:
-[Blah blah:] in blah.o

Google had a little to say about it. Unfortunately, it all appeared wrong.

The issue was a subtle one. The application's "Library search paths" (LIBRARY_SEARCH_PATHS) variable had some historic cruft in it which caused the linker to pull in an old, incompatible version of libstdc++, with a consequence of much hilarity and hair-pulling.

When I removed "$(SDKROOT)/usr/lib/gcc/arm-apple-darwin9/4.0.1" from the list (not sure where this had come from) the application magically linked once more, and kittens and puppies danced with my iPhone again.

Thursday, 24 September 2009

Code: Respect the Software Release Process

Several times recently I've run into problems caused by other developers' lackadaisical approach to the construction of software releases.

Many of these were caused by the sloppy habit of creating a release of a local working directory, rather than from a clean checkout.

For example:
  • An external software release was made from a local directory containing uncommitted source file changes. We have no record of exactly what went into that build. And knowing it was built like this, I have no faith in the quality of the release at all.
  • An external software release was made from a local directory that wasn't up-to-date. So it was missing one feature, and some bug fixes. But the developer tagged the HEAD of the repository, and then claimed he'd built that version. The built code begged to differ.
I mean, come on! It's not that hard, is it?

Well, actually: yes it is. Creating a serious high-quality software release is actually a lot more work than just hitting "build" in your IDE and shipping whatever comes out. If you are not prepared to put in this extra work then you should not be creating releases.

Harsh. But fair.

Respect the Software Release Process

Presuming that you are writing software for the benefit of others as well as yourself, it has to get into the hands of your "users" somehow. Whether you end up rolling a software installer shipped on a CD or deploying the software on a live web server, this is the important process of creating a software release.

The software release process is a critical part of your software development regimen, just as important as design, coding, debugging, and testing. To be effective your release process must be:
  • simple
  • repeatable
  • reliable

Get it wrong, and you will be storing up some potentially nasty problems for your future self. When you construct a release you must:
  • Ensure that you can get the exact same code that built it back again from your source control system. (You do use source control, don't you?) This is the only concrete way to prove which bugs were and were not fixed in that release. Then when you have to fix a critical bug in version 1.02 of a product that's five years old, you can do so.
  • Record exactly how it was built (including the compiler optimisation settings, target CPU configuration, etc). These settings may subtly affect how well your code runs, and whether certain bugs manifest.
  • Capture the build log for future reference.

The bare outline of a good release process is:

  • Agree that it's time to spin a new release. A formal release is treated differently to a developer's test build, and should never come from an existing working directory.
  • Agree what the "name" of the release is (e.g. "5.06 Beta1" or "1.2 Release Candidate").
  • Determine exactly what code will constitute this release. In most formal release processes, you will already be working on a release branch in your source control system, so it's the state of that branch right now.
  • Tag the code in source control to record what is going into the release. The tag name must reflect the release name.
  • Check out a virgin copy of the entire codebase at that tag. Never use an existing checkout. You may have uncommitted local changes that change the build. Always tag then checkout the tag. This will avoid many potential problems.
  • Build the software. This step must not involve hand-editing any files at all, otherwise you do not have a versioned record of exactly the code you built.
  • Ideally, the build should be automated: a single button press, or a single script invocation. Checking the mechanics of the build into source control with the code records unambiguously how the code was constructed. Automation reduces the potential for human error in the release process.
  • Package the code (create an installer image, CD ISO images, etc). This step should also be automated for the same reason.
  • Always test the newly constructed release. Yes, you tested the code already to ensure it was time to release, but now you should test this "release" version to ensure it is of suitable release quality.
  • Construct a set of "Release notes" describing how the release differs from the previous release: the new features and the bugs that have been fixed.
  • Store the generated artifacts and the build log for future reference.
  • Deploy the release. Perhaps this involves putting the installer on your website, sending out memos or press releases to people who need to know. Update release servers as appropriate.
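The crucial "tag, then build from a virgin checkout" steps above can be sketched in a few lines of shell. Here I've used git; the repository path, tag name, and build.sh are all stand-ins for your own project's:

```shell
# Sketch: tag the release point, then build from a virgin checkout of
# that tag, never from an existing working directory. The repository
# path, tag name, and build.sh are stand-ins for your own project.
tag_and_build()
{
    repo="$1"; tag="$2"
    # Tag exactly the code that constitutes this release...
    git --git-dir="$repo/.git" tag "$tag"
    # ...then check out a virgin copy at that tag and build it,
    # capturing the build log for future reference.
    git clone -q --branch "$tag" "$repo" "build-$tag"
    ( cd "build-$tag" && ./build.sh 2>&1 | tee "../build-$tag.log" )
}
```

The clone lands in its own build-<tag> directory, so nothing from a developer's working copy can leak into the release, and the captured log sits alongside it.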

This is a large topic tied intimately with configuration management, testing procedures, software product management, and the like. If you have any part in releasing a software product you really must understand and respect the sanctity of the software release process.

Friday, 18 September 2009

Code: Creating a framework for the iPhone

This article explains how to build your own framework for Apple's iPhone OS.

The problem

Apple's Xcode development environment does not let programmers create their own framework for use in iPhone OS applications. This has caused many iPhone developers great frustration, although the restriction is for fairly sensible reasons.

So why the restriction?

A framework usually contains a dynamically loaded shared library (and the associated header files to be able to access its facilities). iPhone OS keeps applications very separate from one another, and so there is no concept of a user-created dynamic library shared between applications. There is no central library install point accessible to the developer. Indeed, managing such a software pool would be rather complex on iPhone-like devices, and preventing developers from installing their own shared frameworks neatly sidesteps a whole world of painful shared library compatibility issues, as well as simplifying the application uninstall process.

It's one, fairly final, way to avoid DLL hell!

All applications may link to the blessed, system-provided frameworks. The only other libraries they may use must be standard static libraries, linked directly to the application itself.

Those of us who'd like to supply functionality to other users in library form are left at somewhat of a disadvantage, though. Most developers are used to the simplicity of dragging a framework into their application target in Xcode, and not worrying about header paths or link issues.

It's nowhere near as neat to have to provide a static library and a set of associated header files in a flat directory. This requires your clients to work out the installation in their application for themselves. It's not hard, but it is tedious. You've also got to ship a library version for each platform the developer will need (at the very least, an arm code library for use on the iPhone OS device itself, and an i386 build for them to use in the iPhone simulator).

It's clumsy.

But fear not, there is a way...

How to build your own framework

I've worked out how to create a usable Framework that you can ship to other iPhone OS application writers. You can ship libraries that are easy to incorporate into other projects, and can exploit the standard framework versioning facilities.

There is one caveat: the framework will not be a standard shared library, it will provide a statically linked library. But the application writer need not be concerned about this issue. As far as they're concerned everything will just work. We are using Apple machines, after all.

Here's how to do it:

1. Structure your framework's header files.

Let's say your library is called "MyLib". Structure your project with a top-level directory called "Include", and inside that make a "MyLib" subdirectory. Put all your public header files in there.

To be idiomatic, you'll want to create an umbrella header file "Include/MyLib/MyLib.h" that includes all the other headers for the user's convenience.

Set up your Xcode project's "Header Search Paths" build parameter to "Include".
Now your source files can happily #import <MyLib/MyLib.h> in the same way they'd use any other framework. Everything will include properly.

2. Put your source elsewhere

I have a "Source" directory containing subdirectories "Source/MyLib" and "Source/Tests". You can put your implementation files (and private header files) wherever you want. Just, obviously, not in the Include directory!

3. Create a static library target

Create an iPhone OS static library target that builds all your library sources. Call it MyLib, and by default it will create a static library called libMyLib.a.

4. Create the framework plist file

Create a plist file that will be placed inside your framework, describing it.

I keep mine in Resources/Framework.plist. It's a piece of XML joy that should look something like this (adjust the names, identifier, and version to suit your library):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleDevelopmentRegion</key>
    <string>English</string>
    <key>CFBundleExecutable</key>
    <string>MyLib</string>
    <key>CFBundleIdentifier</key>
    <string>com.example.MyLib</string>
    <key>CFBundleInfoDictionaryVersion</key>
    <string>6.0</string>
    <key>CFBundlePackageType</key>
    <string>FMWK</string>
    <key>CFBundleVersion</key>
    <string>1.0</string>
</dict>
</plist>
5. The magic part... build your framework by hand

Create a shell script to build your framework. I have a "Scripts" directory that contains it, because I like to keep things neat like that.

The first line is the canonical hashbang:
#!/bin/bash
There are two parts to this file...

5a. Build all the configurations that you need your framework to support

This must be at least armv6 for the device, and i386 for the simulator. You'll want these to be Release configuration libraries.
xcodebuild -configuration Release -target "MyLib" -sdk iphoneos3.0
xcodebuild -configuration Release -target "MyLib" -sdk iphonesimulator3.0

So that's our libraries built. Now...

5b. Piece it all together

With a little understanding of the canonical structure of a framework directory, our ability to write a plist, and the knowledge that putting a static library in the framework instead of a dynamic library works fine, you can create your framework like this (apologies that blogger has kinda killed the formatting here):

# Define these to suit your nefarious purposes
FRAMEWORK_NAME="MyLib"
FRAMEWORK_VERSION="A"

# Where we'll put the built framework. The script presumes we're in the
# project root directory
FRAMEWORK_BUILD_PATH="build/Framework"

# Clean any existing framework that might be there already
echo "Framework: Cleaning framework..."
[ -d "$FRAMEWORK_BUILD_PATH" ] && rm -rf "$FRAMEWORK_BUILD_PATH"

# The full name of the framework we'll build
FRAMEWORK_DIR=$FRAMEWORK_BUILD_PATH/$FRAMEWORK_NAME.framework

echo "Framework: Setting up directories..."
mkdir -p $FRAMEWORK_DIR/Versions
mkdir -p $FRAMEWORK_DIR/Versions/$FRAMEWORK_VERSION/Resources
mkdir -p $FRAMEWORK_DIR/Versions/$FRAMEWORK_VERSION/Headers

echo "Framework: Creating symlinks..."
ln -s $FRAMEWORK_VERSION $FRAMEWORK_DIR/Versions/Current
ln -s Versions/Current/Headers $FRAMEWORK_DIR/Headers
ln -s Versions/Current/Resources $FRAMEWORK_DIR/Resources
ln -s Versions/Current/$FRAMEWORK_NAME $FRAMEWORK_DIR/$FRAMEWORK_NAME

# Check this is what your static libraries are called
FRAMEWORK_INPUT_ARM_FILES="build/Release-iphoneos/lib$FRAMEWORK_NAME.a"
FRAMEWORK_INPUT_I386_FILES="build/Release-iphonesimulator/lib$FRAMEWORK_NAME.a"

# The trick for creating a fully usable library is to use lipo to glue the
# different library versions together into one file. When an application is linked
# to this library, the linker will extract the appropriate platform version and
# use that.
# The library file is given the same name as the framework with no .a extension.
echo "Framework: Creating library..."
lipo \
    -create \
    -arch armv6 "$FRAMEWORK_INPUT_ARM_FILES" \
    -arch i386 "$FRAMEWORK_INPUT_I386_FILES" \
    -o "$FRAMEWORK_DIR/Versions/Current/$FRAMEWORK_NAME"

# Now copy the final assets over: your library header files and the plist file
echo "Framework: Copying assets into current version..."
cp Include/$FRAMEWORK_NAME/* $FRAMEWORK_DIR/Headers/
cp Resources/Framework.plist $FRAMEWORK_DIR/Resources/Info.plist
That's it. Run that script and you'll create a framework in "build/Framework". In there is MyLib.framework. This directory can be shipped to your external application developers. They can incorporate it into their iPhone OS applications like any other framework.

Congratulations, you are now a hero.

Other remarks

I have presented here the most basic structure of a shell file. My production version includes more robust error handling, and other facilities that are relevant to my particular project.

I also have a build script that automatically creates documentation for the framework that I can ship with it. Indeed, I have a release script that applies versioning information to the project, builds the libraries, creates a framework, assembles the documentation, compiles release notes, and packages the whole thing in a pretty DMG. But that's another story.

If calling scripts from the command line scares you, you may choose to make a "Run Script Build Phase" in your Xcode project to call your framework script. Then you can create a framework without having to creep to the command line continually.

In summary, the final file layout of my project looks like this:

Include/
  MyLib/
    MyLib.h
    ... (the other public headers)
Resources/
  Framework.plist
Scripts/
  ... (the framework build script)
Source/
  MyLib/
    ... (implementation files and private headers)
  Tests/

I hope you have found this tutorial useful. Let me know what frameworks you manage to build.

Friday, 4 September 2009

Code: 97 Things Every Programmer Should Know

The development website for O'Reilly's latest project in its "97" series has now been made public.

97 Things Every Programmer Should Know looks like it will be an interesting book. Edited by Kevlin Henney, it contains a series of 97 (I'm not quite sure why 97 exactly; it's a fairly weak gimmick) very short, pithy pieces of advice aimed at software developers to help them craft better code.

I have submitted a number of entries to this project.

You can take a look at all the entries submitted so far, and perhaps contribute something yourself. The site is a Wiki, and is now open for collaboration by the wider programming community.

Wednesday, 19 August 2009

Code: How to spell svnversion in git

I've been increasingly using git to clone and work on remote Subversion repos, as it's tedious to perform VCS operations over the Atlantic when your web connection is tantamount to IP over carrier pigeon.

One operation I require in order to write sane autobuild scripts is something akin to svnversion. I need it to work for me, in my git svn-cloned repo, and for the mortals in other offices using plain ol' svn.

It took me a while, but here's what I came up with.

# Returns the svn version number of the current directory.
# Works in an svn working copy, or in a git svn clone of an svn repo.
get_svnversion()
{
    SVNVERSION=`svnversion 2>/dev/null`
    if [ "X$SVNVERSION" == "Xexported" -o "X$SVNVERSION" == "X" ]; then
        # Not an svn working copy: ask git svn for the revision instead
        SVNVERSION=`git svn find-rev $(git log -1 --pretty=format:%H 2>/dev/null) 2>/dev/null`
    fi
    echo $SVNVERSION
}

echo $(get_svnversion)
Of course, there are some unpleasant dependencies on the particular version of svnversion you have installed and its output on a non-svn tree. You could probably make this a little more bullet-proof if required.

Oh, and before you ask: blogger ate my formatting.

Wednesday, 15 July 2009

How to move your iTunes media onto a new hard disk

I run iTunes on Mac OS. I'm at the latest version (8.2 at the time of writing).

I have filled a hard disk with 11,000 tracks, many videos, podcast subscriptions, and applications. I like to keep a separate hard disk for all my iTunes use, distinct from the main drive. This drive contains the audio, video, and the iTunes database files.

Having filled up my media disk, it was time to upgrade to a larger disk in the same machine. It's not too hard if you let iTunes manage your music for you; just follow this HOWTO.

However, I like to manage my files manually. I have a directory structure which separates files according to use (I use my media with programs other than iTunes, but you need iTunes to sync with iPods). iTunes permits this, but doesn't like it.

Moving the iTunes media onto a new disk is hard because:
  • iTunes does not cope well with its database files being moved
  • I sync with a number of iPods and an iPhone, and I don't want to lose sync with those devices.
  • The iTunes database file is a closed binary format, and not easy to edit.
(On the Mac, at least) iTunes is clever enough to track file movements on the same disk. You can rearrange your media files, and iTunes won't get confused. This is true for HFS+ at least (perhaps iTunes tracks files by ID rather than filename; I never worried about how it works). However, iTunes cannot cope with files moved between disks, which makes migrating your iTunes database somewhat complex.

There are a number of good HOWTOs on the net for this kind of thing (see the references at the end), but none of them quite described exactly what I wanted to do.

So here's my HOWTO. If you know any better versions of these steps, let me know.

Before you start:
  1. You have installed the new drive, and formatted it, etc.
  2. You can see the new drive at the same time as the old drive.
Then follow these steps:
  1. Quit iTunes. Best not to have it updating the database or downloading podcasts whilst you're working on it!
  2. Copy entire contents of old media disk to new one, including all your media and the "iTunes" directory full of database files, application downloads, podcasts, etc.
  3. For safety's sake, I renamed the "iTunes" folder on the old disk to try to prevent iTunes from using it again and confusing matters. Based on iTunes cleverness, it might have spotted the rename magically - perhaps it would have been better to archive the old "iTunes" directory and delete the original?
  4. Go to new media drive. Look in iTunes directory. Open the "iTunes Library" file - it's iTunes's binary-format database. Open it with a text editor, select everything in it, and delete it all. Save the file. Ensure it has size zero.
  5. Open the "iTunes Library.xml" file in the same directory in a text editor (this is a XML human-readable version of most of the data in the database). Do a global search and replace for all "/Volumes/OldMediaDriveName" to "/Volumes/NewMediaDriveName" (changing those names appropriately, obviously).
  6. Start iTunes with the Option (alt) key held down. It asks for you to provide the location of a new iTunes data file. Select the new drive's iTunes directory.
  7. Get ready for a long wait. iTunes will now rebuild it's binary database from the XML file. For a large database this takes a VEEEEERY long time. I was waiting for over 30 minutes (on a PPC Dual G4, to be fair). Answer nagging questions as required. Trashing the iTunes database loses a lot of important, but non-essential information like album art associations, window setup, etc. iTunes will spend a while churning through all your albums trying to download cover art, work out volume normalisation, etc.
  8. Sort out the applications you have downloaded. Look under "Applications" in iTunes' source list and you'll see that iTunes hasn't picked and of them up. Drag all the "*.ipa" files from your "iTunes/iPod Games" and "iTunes/Mobile Applications" directories into the Applications view. They'll magically appear. Purchased applications will copy over fine.
  9. Sort out your podcast subscriptions. Sadly, they've been lost, too. Despite some tutorials descriptions, I can't easily find a way resubscribe. You'll have imported a load of podcast mp3 files which have genre "Podcast" - you can see them in your library with a simple search. The corresponding podcast subscriptions have been lost. You can therefore see the podcasts you were subscribed to; their feed URLs is available in the "Get Info" iTunes dialogue box for each file. You'll have to resubscribe manually.
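If you're comfortable in a terminal, the XML search-and-replace step can be done in one pass with sed. This is just a sketch with placeholder drive names (and a toy one-line stand-in for the real "iTunes Library.xml"); substitute your real volume names:

```shell
# Toy stand-in for the real iTunes Library.xml, using placeholder drive names.
printf '<string>/Volumes/OldMediaDriveName/Music/song.mp3</string>\n' > "iTunes Library.xml"

# The actual fix-up: keep a backup, then rewrite every old-volume path in one pass.
cp "iTunes Library.xml" "iTunes Library.xml.bak"
sed 's|/Volumes/OldMediaDriveName|/Volumes/NewMediaDriveName|g' \
    "iTunes Library.xml.bak" > "iTunes Library.xml"

cat "iTunes Library.xml"   # paths now point at the new drive
```

Using "|" as the sed delimiter saves escaping all the slashes in the volume paths.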
That's it. We're all done. You can now sync your iPods fine. Sync settings are NOT lost, thankfully.


Tuesday, 7 July 2009

Writing: Improve code by removing it

The July 2009 issue of ACCU's CVu magazine is about to land on doormats over the world. It contains my latest Professionalism in Programming column entitled: "Improve code by removing it".

It does what it says on the tin.

When I first wrote it, the article was really ropey. Then I trashed half of it, and it looked a lot better. I took out a couple more paragraphs and it was almost perfect. Then I took out the rest of it, leaving just a title and the author bio. Now it's marvelous.

(Becoming a) Git

I wanted to learn something new. I hadn't had much exposure to distributed version control systems, so I took the plunge and installed git. Throwing caution to the wind, I relied on it immediately for critical project work. It was an interesting, and not entirely unpleasant, experience.

I chose git for a few reasons:
  • people I knew had been using it, and gave me favourable reports
  • it has good svn (Subversion) integration
  • I know people who favour bzr (Bazaar), but git seems the more powerful puppy, and the one that might teach me more overall

So far, I think that git is a very, very good tool. However, even though it's becoming more mature, it is not a friendly beast and not for the timid.

    Becoming distributed

    There are plenty of good articles floating around the net that describe the advantages of DVCS over the traditional centralised model. It makes a lot of sense. Even so, centralised version control isn't going anywhere soon.

    The main advantage of git for me is the svn integration. My project's repository is held on the other side of the Atlantic and I'm attached to it by a thin wet string, so access times to the repository are pitifully poor. Running something like git provides me with a local mirror so query operations are far faster, and I have the ability to make "local" checkins that are versioned but not yet pushed up to the central svn for public consumption.

    Both of these are neat tricks.


    I've installed git on Linux, MacOS and Windows. Naturally, the Linux install was the easiest. I pulled it in through Kubuntu's package manager, and everything worked swimmingly.

    The Windows port is interesting. Since Windows is not sufficiently Unix-like to run Git in any sane way, the nice Windows distributors ship with a minimal bash environment. For this old Unix-head it's a wonderfully useful thing, and saves me reaching for cygwin so much. It might be a bit of a bodge, but I like it.

I'm doing most of my work on Mac OS at the moment. There are a few ways to get git on the mac, but the Git on MacOS installer project seems the most sensible (at least, at first glance). It works well enough; however, it's still not running perfectly for me. The svn integration is bust: the git svn dcommit script doesn't complete correctly. After each subversion commit the tool needs to do a git svn rebase, but its internal script paths are incorrect and this always generates an error. So you have to run git svn rebase manually to sort it all out. If you forget to do this, all sorts of chaos ensues as repos get out of date and not all of your changes propagate upstream.

    Using git

Like most DVCSs, you certainly have to have some kind of idea what's going on before you dive straight into git usage. Git has a quite steep learning curve, and its documentation is still not at the same level as other tools', no matter what you may hear from other people. Many problems must be resolved by Google searches rather than looking in the "git book".

General git workflows are simple and pleasant, though. To this old Subversion-head, the idea of staging the changes you will make prior to checking them in seemed clunky at first. However, after a few commits I have really come to like the workflow. And the fact that you can cherry-pick individual parts of a file to commit is very neat indeed.
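The staging area is easy to demonstrate. Here's a minimal sketch (using a throwaway repository and made-up file names) of committing one change while deliberately leaving another out:

```shell
# Throwaway repository to show off the staging area (assumes git is installed).
mkdir staging-demo && cd staging-demo
git init -q .
printf 'ready to ship\n' > done.txt
printf 'half finished\n' > wip.txt

git add done.txt        # stage only the finished work; wip.txt is left out
git -c user.name=demo -c user.email=demo@example.com commit -q -m "ship the finished part"

git status --short      # wip.txt is still untracked, waiting for a later commit
```

The same idea extends down to individual hunks within a file via git add -p, which prompts you interactively for each change.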

The git stash is also a cute feature, allowing you to temporarily park the changes you're working on (effectively in a short-lived temporary branch), do something else, and then reapply your changes once you're ready to come back to them.
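A stash round-trip looks like this - again a throwaway sketch with made-up file names:

```shell
# Throwaway repository to show a stash round-trip (assumes git is installed).
mkdir stash-demo && cd stash-demo
git init -q .
printf 'stable\n' > notes.txt
git add notes.txt
git -c user.name=demo -c user.email=demo@example.com commit -q -m "stable version"

printf 'work in progress\n' >> notes.txt
git stash -q            # park the half-done change; the working tree is clean again
cat notes.txt           # back to the last committed 'stable' content
git stash pop -q        # ...do something else, then reapply the parked change
cat notes.txt           # the work-in-progress line is back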

Tool support is a lot sparser than for other version control systems. The command-line distributions all ship with a Tcl/Tk application called gitk, which is remarkably useful and powerful, albeit crap-to-look-at in an early 1980s stylee. On the mac there is gitx, which is cute, but not quite as powerful as gitk.

Complex merges seem harder to resolve in git than in other systems, but this might entirely be down to a lack of understanding on my part. There is a lot more power under the hood, that's for sure, but harnessing it is a struggle. This is a feature, though: git was not designed for idiots.

    It's discomforting to come from a place where you know your version control tool inside out to a place of relative ignorance. But it's Good For You to make this jump every now and again. It puts hairs on your chest. (If you're female, you might not want to do this too often, then.)

    SVN integration

    Apart from the Mac install issue, I've mostly found using git as a local svn mirror to be remarkably effective.

I have found, however, that it's best to keep each git repository separate, each a clone of the main svn repo. I have a number of machines on which I build the code. I'd initially hoped to make one svn clone repo, then clone repos on the other machines based on that one git clone. The clones work OK, but pushing back the changes appears to cause all sorts of confusion and leads to some bogus git svn dcommits at the top of the git tree.

This problem became so bad that I gave up on the idea of a fan-out repository structure, and just cloned each repo from svn individually on each machine. This seems lumpy, and does mean there's more trans-atlantic svn traffic than I'd like. I believe that Bazaar is much better in this respect, but I'd like to hear that confirmed by someone more knowledgeable.

    The future is bright. The future is git.

    I've had a few hiccups along the way, but I'm happy enough to keep going with git. There are plenty of advanced use cases I've yet to encounter, and I dread and look forward to the pain in equal measure.

    Git is the C++ of version control systems

My observation, based on a few months of use, is that git is the C++ of version control systems. This is ironic, given what Linus thinks of C++. However, git is the powerful, can-do-everything, allows-you-to-shoot-yourself-in-the-foot-if-you-don't-know-enough-about-it version control system. People will be prejudiced against git because of its complexity. Some people will love it because of its complexity and power. Sometimes it's the best tool for the job, though.

    Git: it's good for you. Just like C++.

    Thursday, 2 July 2009

    Code Craft: Now available in Russian

    I've just been sent the new Russian translation of my software development book, Code Craft. Interesting reading, indeed. If you understand Russian, that is. I have no idea how good this translation is; maybe any eager Russian readers could tell me?

As you can see, it's an interesting choice of cover for this translation. I wonder what it says about the Russian market? The Japanese and Chinese translations had bright and interesting covers; this is a rather sombre affair.

    Code Craft has been out for about two years now and has proved very popular. Nevertheless, it's frustrating that Amazon is not capable of showing the correct cover image in their listing.

    Now perhaps it's about time I got to writing my second tome?

    C++: How to say "#warning" to the Visual Studio C++ compiler

I encountered a piece of code that would no longer compile in a particular build variant of our product. I wanted to hack it out, and leave a compiler warning in its place so we wouldn't lose track of the change.

    It's easy in gcc. You simply say:
    #warning FIXME: Code removed because...
    So that's me sorted for Linux and MacOS. I'm happy in the Fun Place.

    But in the Dark Place I was clueless. How do you say #warning to Visual Studio? You can happily write #warning in C#, but not C++.

Interestingly, the answer fell below my Google/Boredom Threshold (i.e. a web search didn't reveal the answer in sufficiently few clicks before I lost interest). I just shoved in a run-time assertion instead. It'd do the job, but not as immediately as I would have liked.

Thanks to hashpling and the miracle that is Twitter, I now know the answer, and share it with you in the vain hope it might come higher up the Google rankings for those poor souls that follow me:
    #pragma message ("FIXME: Code removed because...")
    Needless to say, this is all tediously non-standard.

    For bonus points

    This still doesn't get us exactly the same behaviour as gcc's #warning. The message is produced, but without file and line information. This means that if you double-click the message in the VS IDE it will not jump to the warning in the editor window. It also means that build logs aren't much use.

    Sam Saariste pointed this out, and here's the standard preprocessor mumbo-jumbo you have to jump through to get the exact warning behaviour I was after:
#define STRINGIZE_HELPER(x) #x
#define STRINGIZE(x) STRINGIZE_HELPER(x)
#define WARNING(desc) message(__FILE__ "(" STRINGIZE(__LINE__) ") : Warning: " #desc)

    // usage:
    #pragma WARNING(FIXME: Code removed because...)
    Couldn't be simpler, could it?!

    Tuesday, 16 June 2009

    C++: #include <rules>

    My blood is boiling. I'm seething. I'm going to go mad. Is that an overstatement? Possibly. But it's not far off...

    You see, you can get so used to doing things the Right Way that when you stumble across someone doing it the Wrong Way it comes as quite a shock. And a frustration.

    Lately I've been working on a C++ project with appalling include file discipline. It's embarrassingly bad. There is a well-known gentleman's agreement over include files. A #include <etiquette> if you like. Doesn't everyone know about this?

    (Of course, many programmers will cite the fact that C and C++ require such "good practice", rather than ENFORCE it, as a weakness of the language. Perhaps they are correct, but that's a different story.)

    The most basic of the #include <rules> are:

    1. A header file must be self-contained and complete.

    #include-ing it CANNOT produce a build error.

    It does not require you to include more files first. Anything it references is either forward declared (if possible) or #included in that file.

    If this is not the case, the user has to jump through innumerable hoops to work out exactly where the undefined bits of code live, and so what other files must be included first. Naturally, when those files do not include cleanly the task recurses painfully.

    It's annoying. It's wrong. It makes extra work, reduces the code's self-understandability, and opens the door for errors (for example: the wrong files may be included; and it is possible to write header files that behave differently when different sets of include files are brought in beforehand).

    2. Include files internally prevent problems from multiple inclusion

The canonical format for a header file is:

#ifndef UNIQUE_NAME_FOR_THIS_HEADER
#define UNIQUE_NAME_FOR_THIS_HEADER

// any required #includes go here

// header file contents go here

#endif

That "unique" name should be well chosen, and usually based on the name of the include file in question. It should also include the name of the project, and possibly the name of the subsystem too, in order to avoid conflicts with header files you might be importing from any third-party libraries.

Many compilers provide #pragma once as a helpful way to write the same thing. However, this is NOT a standard C or C++ feature. It's cute, but it means that your code is not portable. Does this matter to you? It really ought to. The best advice is to use #pragma once AND the standard include guards together.

    If you do not do this, then multiple includes of a header file will almost certainly generate build errors as the compiler sees re-definitions of the same code constructs.

    Of course, some (very few, comparatively) headers are designed for multiple inclusion (e.g. by defining types based on some preset #define value that you establish prior to the #include). You CAN still define include guards for these headers using a little preprocessor ## macro string gluing.

    Objective-C provides an interesting alternative to #include guards or #pragma once: the #import directive. This states at the include site that if the file has already been included, do not include it again. Otherwise go ahead and include it now.

It's cute, but it is just plain wrong. The calling site is NOT the place to specify that a file should be included one time only. This is part of the contract provided and required by the include file, and so should be stated and enforced there. Also, #import and #include can be freely interchanged on the same files; the user should not be able to break the contract by accidentally #include-ing rather than #import-ing the header.

    Precompiled header files are another source of weirdness that I don't have the time to moan about properly here. They bring their own set of potential misuses.

    3. Define things in ONE FILE ONLY

    Do not have multiple #include files with different declarations of the same name.

    This is a violation of the ODR (One Definition Rule). If the files define specific variations that are needed for different build configurations or targets, then still put them all in the same file. DO NOT leave it up to the #include-r to work out which of the files to use themselves. Any silly #ifdef determination logic should be INTERNAL to the header file, not external at the include-site.

    It's obvious what will happen if you ignore this rule. Someone will accidentally include the wrong header version somewhere, and cause errors. Probably subtle, odd, and hard to track down runtime errors, too.

    4. Differentiate "public" interface header files from implementation files

    Many projects contain subcomponents with a small number of "public" header files defining their interface, and many internal .h files for implementation classes.

    Differentiate these files.

    This should be done by placing them in very different file locations. This will prevent a subcomponent interface user from accidentally including an internal header and using it as if it's part of the public interface.

    There are more good practice rules than those. But these are the most basic and important ones.

    The project I've been working on breaks these rules all over the place and it makes working with the code really, really complicated. Yes, I'm moaning. But we should know better than this by now. And we really should have developed sharper standard tools for this by now.

    The greater the size of the project, the harder these problems are to fix. I'd love to dive in and sort the whole thing out in the current project, but I fear it will take a very, very long time. It's particularly hard as it's a large codebase that builds on several platforms with a number of compilers.

    Tuesday, 9 June 2009

    Tech toys: Synergy

    I have four machines on the desk in front of me, each in active daily use. The upshot? A tangle of keyboards and mice on the work surface, and no space for my arms.

    But I found the solution. Perhaps I'm a bit late to join the party, but it's here: Synergy.

    Synergy is a wonderful application that shares one keyboard and mouse between many machines. It's as if you were using a multi-head display. But with many bodies, too. Synergy is relatively easy to install, although by default requires a little text file configuration. Nothing frightening for a techie, but a casual user might need a deep breath and a good run up.
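For the curious, that configuration file is pleasantly small. Here's a hypothetical synergy.conf for a server machine called "desktop" with one client, "laptop", sitting to its right (the screen names are placeholders, and must match each machine's hostname):

```
section: screens
    desktop:
    laptop:
end

section: links
    desktop:
        right = laptop
    laptop:
        left = desktop
end
```

The links section describes the physical layout of the monitors, so the mouse pointer slides off the correct edge of one screen onto the next.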

    Performance is really good (it does slow down when one machine is heavily CPU loaded or performs a lot of network access, but this is relatively rare). The application works on Linux, Mac OS, and Windows. I run the server on Linux, with two mac clients and a Windows client. The code hasn't been updated in about two years, but it seems every bit as good as it must have been two years ago!

    And now? I have one keyboard. One mouse. Lots of free desk space. And deep joy.


There are several Mac OS GUI front ends floating around the ether that might be useful. I've not used any of them, so they may or may not be any good...

      Thursday, 14 May 2009

      Writing: A case for code reuse

      The May issue of ACCU's CVu magazine is hitting doormats right now, and contains my latest Professionalism in Programming column, called "A case for code reuse". I present a number of "re-use" cases (pun intended), and show a real use for your old code - as a tool to help you write better code in the future.

      This is another excellent issue of CVu. Edited by Steve Love (don't ask how many beers contributed to his picture on the editorial page), there are some great articles on subjects including: distributed version control, job hunting, and the D programming language (this latter one contributed by Andrei Alexandrescu).

      Friday, 8 May 2009

      Const-correctness for documents

      Const correctness is the art of using the type system of a static programming language (conventionally C or C++) to ensure that immutable objects are never modified, and mutable ones... can be.

It's a real boon for ensuring program correctness. If you don't want your data fiddled with, then you declare it (or any references you give to it) const, so that users of said data cannot change it. Note that you can give away a const reference to mutable data - in this case the user cannot change it, but you may do so behind their back.

      And now, in the Real World...

      I'm getting fed up with the sloppy business practice of sending around copies of Microsoft Word documents as email attachments. Everyone does it. So it must be OK. Right?

No. It's cobblers. For a start, I don't have a copy of Word. I know that makes me rare, but there are a few freaks like me out there. Although Open Office is good, it's never perfect at rendering complex Word documents.

However, I'm more worried about the distribution of a mutable document (the Word document itself). Sending a .doc to people who can modify it in ways that are not easily traceable, and then send it onwards, is not ideal. If the document proudly proclaims that you are the author, then this should scare you rigid. Some muppet in marketing could change your text beyond all recognition - making it factually incorrect, but not changing the author line - and then publish it to the world, making you look a total plonker in the process.

      Ensure the const-correctness of your documents.

      When you distribute any document that should not be modified by others, send around a PDF copy. OK, PDFs can be annotated, but those annotations are clear and cannot easily be mistaken for part of the original document. Of course, some devious idiot can make a convincing variant, but then a devious maintenance programmer could always const_cast your data's immutability away.

      PDFs are the mainstay of const-correct document control.

      Bonus points: yes, this is not entirely the same as my programming language analogue. In C or C++ a user could make their own copy of your const data and modify it themselves. Fair game. You can also give away a const reference to some data, but still change that data yourself. How would you do these things in the document? Perhaps consider a URL as your document reference?

      Legacy Code: Learning to live with it

      The slides from my ACCU 2009 conference presentation, Legacy Code: Learning to live with it, are now available from the conference website, here.

      Click on {Slides} link to download a PDF copy. In the comfort of your own home, you too can enjoy theology, underwear, frogs, a shopping list or two, and a whole pile of dodgy code.

      Monday, 27 April 2009

      ACCU 2009: The aftermath

The ACCU conference is, without doubt, the technical highlight of my year. A chance to put down the tools for a while, and spend time with other like-minded developers who care about crafting great code.

      ACCU 2009 was last week. I'm now recovering! Every year the conference induces brain overload, and sleep deprivation. I never fail to learn exciting new stuff, to be encouraged to think new thoughts, and to meet interesting fellow developers.

      I'm not going to provide an enormous writeup of this year's event here. There are many people who have done such a thing already in their blogs. If you didn't go, then no doubt they'll make very interesting reading.

      This year, I learnt that going to bed before 4am does help with the delivery of your session the next day! Perhaps I was not as animated as usual, but I trust my session on Living with Legacy Code was useful.

It's a great technical conference, but it has a wonderful social aspect, too. This year spaces drank tabs under the table (well, around it, at least) to settle an old score that a game of squash simply couldn't answer: which is God's One True Way To Format. And over the course of the event much money was raised for Bletchley Park, a suitable and very worthy cause.

      My thanks go to the dedicated team of conference organisers and administrators, to every speaker, and to the delegates who made the event another incredible success. If you missed it this year, I highly recommend booking a place at next year's conference. I have no idea what the programme will look like, but I already know it'll be good.

      Tuesday, 21 April 2009

      Subversion, KDiff3, and Cygwin

      Recently I've been doing work in a Windows environment, which is a bit of a culture shock for this Linux/Mac weenie.

As ever, I installed cygwin early on to make my life bearable. I'm not sure how I'd navigate without vim, grep and ctags. Are there actually any IDEs with genuinely useful code navigation?

      To make subversion usage the mirror of my Linux setup, I installed the excellent Windows port of kdiff3. My usual trick is to set a simple svn alias that involves kdiff3 when I need it. Something like:
      alias sd="svn diff --diff-cmd kdiff3 -x ' -qall '"
      (The nasty -x parameter is a workaround for some problematic kdiff3 invocations on Linux.)

      However, the same trick did not always work under cygwin. From time to time, kdiff3 would complain that it could not find a file.

      It turns out that this is a subtle problem where cygwin sometimes tries to convert filenames to DOS format before passing them to kdiff3.exe. When it does this, it fails and creates hybrid half-DOS/half-unix filenames. Handy.

      The trick to make it work is to create a little kdiff3 wrapper script and use that, rather than kdiff3 directly. The magic rune you need to incant in the script is cygpath.


#!/bin/sh
# svn calls us as: <cmd> -u -L <left label> -L <right label> <left file> <right file>
LEFT_NAME=${3}; RIGHT_NAME=${5}
LEFT=`cygpath -d "${6}"`; RIGHT=`cygpath -d "${7}"`   # DOS-style paths for the native .exe
/cygdrive/c/Program\ Files/KDiff3/kdiff3.exe "$LEFT" "$RIGHT" -L1 "$LEFT_NAME" -L2 "$RIGHT_NAME"

      Monday, 20 April 2009

      Speaking: Journeys (The loss of a child)

      A few weeks ago I spoke at Cambridge Community Church with my wife in a talk entitled Journeys. We spoke about a particularly hard time in our life: seven years ago when we lost our beautiful 12 week old baby girl in a car crash.

Perhaps this is a slightly unusual (and sombre) topic for this blog, but if you're interested in our deeply personal and unashamedly Christian viewpoint on the loss of a child, and the importance of our faith, the support of friends, and the church family, then you can hear the pair of us speak here. We begin speaking about 24 minutes in.

      In the talk we mentioned the song I played at Jess' thanksgiving service, and a few people have subsequently asked about it. If you'd like to listen, there's a rough version of it available to listen to on MySpace here.