Building a new kernel for the Nexus Player

I bought a cheap Micro USB (OTG) to USB hub with built-in Ethernet from eBay for my Nexus Player. It seemed like the perfect way to make use of the single USB port available.


Once I got it, I realised the Ethernet used a Davicom chipset and while it did have good Linux support with the dm9601 module, it wasn’t enabled in Android kernels.

As I started looking at guides for compiling kernels for Android, I found that they didn’t quite work properly for the Nexus Player.

Most Android devices are ARM-based, but since the Nexus Player is x86, there are some minor differences in a few of the steps.

Here’s a quick run-down of the steps I did to simply add a new module to the Nexus Player kernel. I’m assuming that you’ve read a few of the more detailed guides or you’ve done some kernel building before.

NOTE: You’ll need to enable OEM unlocking on the device. You probably don’t need to be rooted, but I was. YMMV.

First step is to find the kernel version you’re currently running. Connect to the device with adb and run this in the shell:

shell@fugu:/ $ cat /proc/version

We’ll need to get the git commit for the kernel.

In this case:

Linux version 3.10.20-g912890c

The kernel commit is the part after the ‘g’, so ‘912890c‘.
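If you want to pull the commit hash out programmatically, a small sketch (using the version string from above):

```shell
# Strip everything up to and including the '-g' marker, leaving the
# short commit hash that Google's build embedded in the version string.
version="Linux version 3.10.20-g912890c"
commit="${version##*-g}"
echo "$commit"
```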

Depending on your Linux distro, you may want to pull Google’s toolchain. I’m using Arch Linux, which ships GCC 5.3, and I hit a build error, so instead I just pulled the same toolchain that Google used for their production builds. If you want to match it, just look for the GCC version in the output above, e.g. ‘gcc version 4.8’.

In my case, I cloned the repository:

git clone

Add this to your path (substitute the $HOME/android part for your path)

export PATH="$PATH:$HOME/android/x86_64-linux-android-4.8/bin"

Add this CROSS_COMPILE variable to instruct the build to use this new toolchain. This is the prefix of the GCC binaries in the bin directory

export CROSS_COMPILE="x86_64-linux-android-"

Now we’ll clone the Kernel repository. For the Nexus Player (fugu) we’re using:

$ git clone fugu-kernel
$ cd fugu-kernel

We’ll create our own branch to work on, based on the last commit of our current kernel

git checkout -b my-fugu-kernel 912890c

Let’s modify the kernel now

$ export ARCH=x86
$ make fugu_defconfig
$ make menuconfig

We can now make changes to our config.

Once you’re happy, we’ll build it using all our cores:

$ make -j$(nproc)

Once you’re done, you’ll have a kernel image at arch/x86_64/boot/bzImage

Now we’ll update the boot image to include our new kernel.

You’ll want to install the abootimg tool and get a copy of the boot.img. It’s best to take it from the Nexus Player factory image: download it, then extract it with tar and unzip.

We’ll update our factory boot.img and include our new kernel only:

$ abootimg -u boot.img -k (kernel path)/arch/x86_64/boot/bzImage 
reading kernel from (kernel path)/arch/x86_64/boot/bzImage
Writing Boot Image boot.img

Boot the Nexus Player into fastboot mode, and we’ll test our new kernel (before flashing)

$ fastboot boot boot.img

If you’re happy with it, then don’t forget to flash it:

$ fastboot flash boot boot.img


For more information, I found this page really useful:


Posted in Personal at December 19th, 2015. 6 Comments.

P2127 code on a Focus XR5/Focus ST

I had an issue with my 2007 Ford Focus XR5 (aka Focus ST) recently.

I started getting a ‘Steering Assist Failure‘ message appearing occasionally when starting my car. Usually I could just turn it off, leave it for a little while, then start it again and it would be OK, and the car would behave normally.

Shortly after I got an ‘Engine Systems Fault‘ message and the car entered ‘Low Acceleration Mode‘ to protect itself. Using my OBD-II adapter and the ‘Torque’ Android app, I found the specific error to be:

P2127 – Throttle/Pedal Position Sensor/Switch ‘E’ Circuit Low

Using the magic powers of Google, I found this:

“Finally the P2127 error code was sorted.

When the car is started in the morning the voltage drops below 9v.

9v is the threshold voltage for the all the ECU sensors, first sensor on my car that then schemes there’s k@k in the land is the throttle position sensor.”

Using my very cheap multimeter, I measured the voltage across the battery under load. When I turned the key, sure enough, the voltage dropped from 12.43V to 8.3V, sending the throttle position sensor into an error state. Testing the same thing on Bek’s car showed it dropped to about 10.5V, which is much healthier.

Once I installed a new battery, everything worked perfectly.

Hopefully this might help someone else, although I’m sure a different type of car would exhibit different symptoms.

Posted in Personal at March 28th, 2014. 5 Comments.

ZFS on Linux

ZFS is a fantastic filesystem developed by Sun. Compared to other filesystems, it’s quite interesting as it combines both a filesystem and a logical volume manager. This allows you to get great flexibility, features and performance. It supports things like integrated snapshots, native NFSv4 ACL support and clever data integrity checking.

I’m now running a HP ProLiant MicroServer N36L which is a small NAS unit containing a 4-bay SATA enclosure. It has a low-performance AMD CPU, and comes with 1GB RAM and a 250GB harddisk. I’ve upgraded mine to 4GB of RAM and 4 x 2TB Seagate Barracuda drives.

The benefit of these units is that they’re standard x86 machines, allowing you to easily install any OS you like. They’re also really cheap and often have cash-back promotions.

I bought mine when I was in the UK and brought it back with me to Australia. I waited until I got back to upgrade it, to save myself the trouble of shipping the extra hard disks.

In this post, I’ll document how to easily install ZFS on Debian Wheezy and some basic ZFS commands you’ll need to get started.


UPDATE: ZFS on Linux now has their own Debian Wheezy repository!

Install the ZFS packages

# apt-get install debian-zfs

This should use DKMS to build some new modules specific to your running kernel and install all the required packages.

Pull the new module into the kernel
# modprobe zfs

If all went well, you should see that spl and zfs have been loaded into the kernel.


Prepare disks

ZFS works best if you give it full access to your disks. I’m not going to run ZFS on my root filesystem, so this makes things much simpler.

Find our ZFS disks. We use the disk IDs instead of the standard /dev/sdX naming because it’s more stable.
# ls /dev/disk/by-id/ata-*
lrwxrwxrwx 1 root root 9 Jan 21 19:18 /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E1GYH5 -> ../../sdd
lrwxrwxrwx 1 root root 9 Jan 21 08:55 /dev/disk/by-id/ata-ST2000DM001-9YN164_Z1E2ACRM -> ../../sda
lrwxrwxrwx 1 root root 9 Jan 21 08:55 /dev/disk/by-id/ata-ST2000DM001-9YN164_Z1F1SHN4 -> ../../sdb

Create partition tables on the disks so we can use them in a zpool:
# parted /dev/disk/by-id/ata-ST2000DM001-9YN164_Z1E2ACRM mklabel gpt
# parted /dev/disk/by-id/ata-ST2000DM001-9YN164_Z1F1SHN4 mklabel gpt
# parted /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E1GYH5 mklabel gpt


Create a new pool

ZFS uses the concept of pools in a similar way to how LVM would handle volume groups.

Create a pool called mypool, with the initial member being a RAIDZ composed of the remaining three drives.
# zpool create -m none -o ashift=12 mypool raidz /dev/disk/by-id/ata-ST2000DM001-1CH164_Z1E1GYH5 /dev/disk/by-id/ata-ST2000DM001-9YN164_Z1E2ACRM /dev/disk/by-id/ata-ST2000DM001-9YN164_Z1F1SHN4

RAIDZ is a little like RAID-5. I’m using RAID-Z1, meaning that in a 3-disk pool, I can lose one disk while maintaining access to the data.
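The capacity arithmetic is simple: RAID-Z1 spends one disk’s worth of space on parity. A back-of-the-envelope sketch with my drive counts:

```shell
# RAID-Z1 usable capacity: (disks - 1) * disk size, before any
# filesystem overhead. Numbers match my 3 x 2TB setup.
disks=3
size_tb=2
usable=$(( (disks - 1) * size_tb ))
echo "${usable}TB usable of $(( disks * size_tb ))TB raw"
```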

NOTE: Unlike RAID, once you build your RAIDZ, you cannot add new individual disks. It’s a long story.

The -m none means that we don’t want to specify a mount point for this pool yet.

The -o ashift=12 forces ZFS to use 4K sectors instead of 512-byte sectors. Many new drives use 4K sectors, but lie to the OS about it for ‘compatibility’ reasons. My first ZFS filesystem used 512-byte sectors in the beginning, and I had shocking performance (~10Mb/s write).

See for more information about it.
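The ashift value is just the base-2 logarithm of the sector size, which this quick sketch confirms:

```shell
# ashift=12 selects 2^12 = 4096-byte (4K) sectors;
# ashift=9 is the old 2^9 = 512-byte default.
four_k=$(( 2 ** 12 ))
legacy=$(( 2 ** 9 ))
echo "$four_k $legacy"
```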

# zpool list
NAME     SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
mypool  5.44T  1.26T  4.18T    23%  1.00x  ONLINE  -

Disable atime for a small I/O boost
# zfs set atime=off mypool

Deduplication is probably not worth the CPU overhead on my NAS.
# zfs set dedup=off mypool

Our pool is now ready for use.


Create some filesystems

Create our documents filesystem, mount and share it by NFS
# zfs create mypool/documents
# zfs set mountpoint=/mnt/documents mypool/documents
# zfs set sharenfs=on mypool/documents

Create our photos filesystem, mount and share it by NFS
# zfs create mypool/photos
# zfs set mountpoint=/mnt/photos mypool/photos
# zfs set sharenfs=on mypool/photos

Photos are important, so keep two copies of them around
# zfs set copies=2 mypool/photos

Documents are really important, so we’ll keep three copies of them on disk
# zfs set copies=3 mypool/documents

Documents are mostly text, so we’ll compress them.
# zfs set compression=on mypool/documents


ZFS pools should be scrubbed at least once a week. A scrub reads back all the data in the pool, verifies it against its checksums and repairs any data integrity errors it finds.
# zpool scrub <pool>

To do automatic scrubbing once a week, set the following line in your root crontab
# crontab -e
30 19 * * 5 zpool scrub <pool>

Coming soon is a follow-up to this post with some disk fail/recovery steps.

Posted in Linux at October 7th, 2013. 9 Comments.

Puppet filebucketing fail with NFS

I’ve got back to Australia and I’m continuing my UK job from home.

So yesterday, I was doing some cleaning up and needed to unmount an NFS share and clean up its mount point directory.

You can see from the Puppet code below that I marked both resources as ‘absent’ to clean them up.

file { '/tmp/install':
    ensure   => 'absent',
}

mount { '/tmp/install':
    ensure   => 'absent',
    device   => 'nfs-server:/install',
    fstype   => 'nfs',
}
This triggered Puppet to start filebucketing everything it could from the NFS share and subsequently filling up the root filesystem. I managed to revert my commit fairly quickly, but a large number of hosts in our infrastructure had already picked this up. This included both development and production systems.

There is an existing Puppet bug report about the issue at

Apart from the obvious mistake that I should have just run this in a test environment first, this was totally unexpected behaviour.

A couple of things you could do to prevent this happening to you:

  1. Disable filebucketing either globally, or just for this file resource
  2. Don’t try to remove the NFS mount and directory at the same time

Hope this helps.

Posted in Personal at September 27th, 2012. No Comments.

AFL plugins for XBMC

UPDATE: Plugins now updated and working for the 2013 season!

Whilst being in London, it’s been hard to get my AFL fix. So to keep up with what’s going on, I’ve created two new XBMC plugins: AFL Video and AFL Radio.

AFL Video

You can browse all the latest videos from the AFL web site, including match replays, interviews and highlights.

AFL Video plugin

You have a bunch of channels to choose from, including a team channel. The team channel will list the videos specific to your club.

Cats TV

Match replays are usually available 12-24 hours after the match has been played.

AFL Radio

Unfortunately, you can’t watch the games live without some sort of paid subscription – so the radio streams are the next best thing.

AFL Radio

Just choose the stream you want and away it goes. I’m not sure how stable this will be long term due to how the stream works, but so far so good.

Interestingly, most of the streams work outside of the AFL game calls, but they’re only 64k WMA, so the bit rate is a little low.



You can grab the latest ZIP files from the Github project download pages for AFL Video and AFL Radio. You can then choose the ZIP files from the XBMC Addon install from Zip file menu option.

These will also be included in the AU CatchUp TV XBMC repository too.



For any issues, please file a bug in the issue tracker, and please include a copy of your XBMC log file.

Posted in Personal at April 8th, 2012. 90 Comments.

AFL streaming radio from Linux

This is a big sarcastic thanks to AFL and Telstra for building the AFL web site in such a way that it only really works properly in Windows.

Being in London, I want to listen to the Geelong games over the streaming radio, but in Linux (and probably Mac), Silverlight just won’t cut it – and the radio fails to load with an error.

I did some digging around, and worked out the URL for the streaming radio, which you can then plug into MPlayer to obtain the ASX stream:

mplayer -user-agent "NSPlayer/11.08.0005.0000"

The code on the end is the stream ID. These are the station codes I’ve managed to work out:

  • ABC774: 2
  • 5AA Adelaide: 3
  • 6PR Perth: 4
  • 3AW Melbourne: 5
  • National Indigenous Radio Service: 6
  • Gold FM Gold Coast: 7
  • Triple M Sydney: 11
  • Triple M Melbourne: 12
  • Triple M Brisbane: 13
  • Triple M Adelaide: 14
  • K-Rock Geelong: 15

I hope this proves useful to someone else.

UPDATE: This has now been changed for the 2013 season. If you’re interested in listening to AFL radio on Linux/Mac/Windows, then try my XBMC AFL Radio plugin.

Posted in Personal at April 2nd, 2011. 19 Comments.

Using pkgutil on Solaris with Puppet for easy package management

I’ve been using Puppet on Linux systems for some time now, but I’ve only just started using it in a Solaris environment.

I think one of the killer functions of Puppet is being able to easily install packages and manage services on a system. Most Linux distros these days have tools for working with repositories of packages, like Yum on Fedora/RedHat/CentOS and Apt on Debian and Ubuntu. These work really well with Puppet, because you can easily script a class which requires a specific package, and Puppet will just call the package tool and it’ll install the right package and all of the required dependencies.

Using Solaris feels like a step back from Linux, as it has no official repository tool like Yum or Apt. Its package system seems quite primitive and can suffer from the dependency hell that we used to have with RPM before it was wrapped up with Yum. Enter: pkgutil.

Pkgutil is like Yum for Solaris, written in Perl by Peter Bonivart. It was designed for OpenCSW, which is a repository for Open Source packages on Solaris – and also the best place to install Puppet from. With a few simple steps, you can actually build an OpenCSW compatible repository of Solaris packages and tell pkgutil to use it, rather than the standard OpenCSW one.

Puppet has almost gained a proper package provider for pkgutil (see Puppet issue #4258: Add pkgutil provider), which should hopefully be available around Puppet 2.6.4. In the meantime, we can just install it into our Ruby path to make use of it right now.

Steps involved are:

  • Install pkgutil
  • Install Puppet on Solaris
  • Install the pkgutil provider
  • Build an OpenCSW-compatible repository of your own packages
  • Define pkgutil as a provider in your Puppet configuration
  • Install some packages!

Install pkgutil

Before we do anything, we should install pkgutil. This handy one-liner will install it for Solaris 10 and OpenSolaris.

# pkgadd -d`uname -p`.pkg

For Solaris 8 and 9, take a look at the pkgutil installation page for more details.

Install Puppet

Now that pkgutil is installed, installing Puppet is a breeze!

# /opt/csw/bin/pkgutil --install puppet

This will resolve all the dependencies and install everything just like the Linux package management tools do.

Install the pkgutil provider

I’m using a version of pkgutil from Dominic Cleal’s git repository.

# wget --no-check-certificate -O /opt/csw/lib/ruby/site_ruby/1.8/puppet/provider/package/pkgutil.rb

This wget will download it and copy it into the right place in the filesystem for Puppet to pick it up.

Build an OpenCSW-compatible repository

As part of OpenCSW, Peter Bonivart has released a tool for creating OpenCSW repositories, called bldcat. You can find it as part of the pkgutilplus package from OpenCSW.

Create yourself a new directory for your packages on your webserver. For me, I needed OpenSolaris 2009.06 and Solaris 10 support, so:

# mkdir -p repo/solaris/i386/5.11/
# mkdir -p repo/solaris/i386/5.10/

Then just put all your packages into that directory, and run bldcat:

# bldcat .

This will generate the catalog, and descriptions file needed for pkgutil. Once you make this directory available by HTTP, you can add the URL into the pkgutil.conf file.

One thing to remember is that you’ll need to do this on a Solaris machine. Although bldcat itself will run on Linux, it requires some of the Solaris package tools, which won’t be available there. For me, I just ran it on a Solaris machine with the repository directory NFS-mounted from a Linux server.

Now, set the mirror and noncsw entries like this:


For my situation, I had to include a few packages that we provided as our standard environment, and the package names weren’t prefixed with CSW, so the ‘noncsw’ option needs to be set.
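As a rough sketch (the mirror URL here is a placeholder, not from the original post), the relevant pkgutil.conf entries look something like:

```
mirror=http://your.webserver/repo
noncsw=true
```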

Because I wanted a mix of OpenCSW packages and our corporate standard packages, I copied in the OpenCSW packages (and dependencies) along with the corporate ones into the one repository. You can put Puppet in there also.

NOTE: All your packages need to be *.pkg.gz format, so make sure you compress any packages that aren’t already gzipped!

Define pkgutil as a provider in your Puppet configuration

In the site.pp file on my Puppet Master, I include this definition:

Package {
    provider => $operatingsystem ? {
        redhat  => yum,
        centos  => yum,
        sles    => zypper,
        solaris => pkgutil,
    },
}
To see this in action, I’ve used Nagios’s NRPE as an example.

package { nrpe_package:
  name => $operatingsystem ? {
    Solaris => 'CSWnrpe',
    CentOS  => 'nrpe',
    SLES    => 'nagios-nrpe',
    Debian  => 'nagios-nrpe-server',
  },
  ensure => installed,
}
So with pkgutil, installing packages on Solaris can be as easy as Linux with Puppet.

Posted in Geek at December 10th, 2010. 6 Comments.

New Tram Hunter web site

I’ve been slowly doing some bits and pieces for a new Tram Hunter web site. I would now like to announce the new site at

Since v0.5 of Tram Hunter, we’ve included an option to send anonymous usage statistics to a server I have running on Google’s App Engine. My main aim was to generate some heat maps, based on the location of tram stop requests.

You can now see the final version of the heat map, which is generated nightly from the latest 1000 requests. It turns out to be quite interesting to look at.

I’m also using the Google Chart API to generate some nice pie charts showing some other info like handset model, Android version and mobile networks.

In other Tram Hunter news, the latest stats from the Android Market show 5687 total installs, with 4293 active installs (75%). We also have a 4.85 rating out of 5, with 255 comments. The comments are all really positive, so it definitely makes development worthwhile.

I’ve created a new Twitter account for Tram Hunter, so for the latest updates, follow @tram_hunter.

Posted in Personal at October 9th, 2010. 1 Comment.

Tram Hunter: the blog post

I think this post has been a long time in the making, but I thought it might be time to share this little story.

Tram Hunter is a project I started nearly 2 years ago. It’s an Android client for the Yarra Trams TramTracker web service, which their iPhone client leverages to provide real-time tram arrival information to users of trams in Melbourne.

I’m not sure what it is about Trams, but I’m almost enchanted by them. They’re slow, many are really old and usually it’s a pretty rough ride, but they also have much more character than buses and trains.

A friend and I made a mashup of Google Maps with tram stops once, and using timetable information, we plotted approximate locations of trams along a line. The trams even moved along the line, and although it wasn’t entirely realistic, it was fun to watch. I spoke to Yarra Trams about what we had done, and I was invited to come and see the Operations Centre in South Melbourne, which was quite interesting. They offered me a job working with their development team on some .NET/Windows web services stuff (which turned out to be the TramTracker service), but I just couldn’t leave VPAC at the time.

Tram Hunter Stop Details

Real Time Departures

Tram Hunter Menu

Application Menu

So once Android was finally released, I bought their ADP1 development phone as quickly as I could. It cost a fortune, as the Australian dollar was quite weak at the time, but was pretty exciting. The idea of an Open Source phone to finally kick start some innovation in the mobile industry really appealed to me. I started messing with the Android API soon after.

I started working on Tram Hunter but got a bit stuck. I ended up shelving the project because I couldn’t figure out a problem I had, and moved on to other projects. It wasn’t until later (and I had moved to London), I was speaking to a friend of mine who was doing some Android development and he offered to help with the project. I proceeded to clean up the code, so it was in a compile-able state for someone else to look at. Somehow I managed to solve the issue and get something working. Everything seemed to just fall into place, and I had a working first version done.

I came across another project by accident by a couple of guys looking to do the same thing. I emailed them, and suddenly we had three developers and another joined soon after. I opened a Google Code project, put all our stuff into SVN and released version 0.1 to the Android Market. I later started a Google Groups mailing list for the project also.

The Tram Tracker iPhone application is slow and takes many taps to get to the information you want. Their interface has been designed to mimic the information screens at tram stops which is a nice idea, but actually provides an irritating user experience.

In comparison, the goal of Tram Hunter is to bring as many useful features as we can, without compromising the interface. I wanted to provide users the ability to get the information they want, with the least amount of clicks.

By using all the standard Android UI features, we gain a lot without needing to write a lot of code. Google Maps, location information by GPS, Network and Wifi, UI and search are all provided in the API so we don’t need to write this stuff ourselves. It also means it’s fast and simple.

Since the first version, we’ve introduced a few new features and have been fixing bugs. We’re on version 0.5 right now, and there’ll be a new one just around the corner.

The latest stats from the Android Market show 4325 total installs, with 3128 active installs (72%). Not bad considering the slow uptake of Android and the limited number of tram users in Melbourne.

In version 0.4 of Tram Hunter, I introduced some code which (only when specifically enabled by the user) would send some usage information to a Google App Engine site I have set up. Tram Hunter will provide information about the user’s handset and Tram Hunter settings (e.g. what device is being used, what version of Tram Hunter is installed, which mobile network we are using, etc). It will also send information about which stops a user is requesting, and their location when they make the request.

Melbourne Heat Map

I’m currently in the process of generating heat maps to indicate the location of Tram Hunter requests. Unfortunately, the code isn’t finished, so I can’t release them out in the open yet. I have some Google App Engine bits to sort out first, but I’ll be releasing all the interesting statistics to the Android community.

UPDATE: The heat map is now running well on App Engine. The totally new Tram Hunter web site is now up and running with lots of cool graphs and stuff.

What’s next?

For Tram Hunter, I’m still taking feature requests and bug reports at our issue tracker, but I think development of this is starting to slow down.

I have been throwing around the possibility of porting it to Maemo/Meego to support the Nokia N900 (although something similar already exists) and possibly to BlackBerry devices. BlackBerry also uses Java, so it should be quite easy to reuse a lot of code.

I’m also looking into developing another application for timetable information. I’ve had many requests for an app for buses and trains, so I’m looking at leveraging some Google Transit code and providing users with the ability to download specially formatted timetables to their handset and use many of the features of Tram Hunter, but in an offline fashion. The idea is that it’ll be generic enough that it can be used for any type of timetable information anywhere in the world, as long as people are willing to help port the timetable information.

Posted in Geek at September 14th, 2010. 5 Comments.

Using the Yubikey for two-factor authentication on Linux

The Yubikey is a nice little device. It’s quite simple in design and operation.

The key actually emulates a USB keyboard, which makes it instantly usable on any modern OS. You just press the button on the key to generate a one-time password (OTP) to validate you. The method works by typing in your password and, before hitting the return key, pressing the Yubikey button to finish it off. At the end of the OTP generation, the key sends a carriage return itself.

The OTP is then sent to a validation server, either hosted by Yubico themselves, or you can host your own.

I’m going to walk through how you can set up the infrastructure for doing two-factor authentication on Debian. In my specific case, the requirement was two-factor with an Active Directory username/password combination and the Yubikey as the second factor.

Unfortunately, the documentation from Yubico is quite average. To top it off, they insist on using multiple Google Code project sites for hosting their software.

This would normally be fine, but in this case, they have a Google Code project for every single little piece of code. Much of the documentation I found relates to older projects which are not supported by Yubico. This makes working out exactly what you need difficult. Within the Google Code project sites, documentation often runs in circles between projects.

In this document, I’ll look at using PAM to authenticate against the Yubico auth servers first. Once that’s working, I’ll move on to flashing the Yubikey with a new key and using our own Validation System.

NOTE: This is just some rough notes I put together. You should definitely read the Yubico documentation for this to really make sense.

Authenticating with the Yubikey with PAM

Get some dependencies

apt-get install libpam-dev libcurl4-openssl-dev libpam-radius-auth

Make ourselves a source directory

mkdir ~/yubikey; cd ~/yubikey

Get the current tarball of libyubikey, and install it

tar xf libyubikey-1.5.tar.gz
cd libyubikey-1.5
make check install

Get the current tarball of the Yubico C client, and install it

tar -xf ykclient-2.3.tar.gz
cd ykclient-2.3
make install

Get the current tarball of the Yubico PAM module, and install it

tar -xf pam_yubico-2.3.tar.gz
cd pam_yubico-2.3
make install

You should end up with your Yubico PAM module ‘/usr/local/lib/security/’

We’ll refer to this in our PAM config /etc/pam.d/openvpn

# /etc/pam.d/openvpn - OpenVPN PAM configuration
# We fall back to the system default in /etc/pam.d/common-*
auth required /usr/local/lib/security/ id=1 debug authfile=/etc/yubikeyid
auth required no_warn try_first_pass
@include common-account
@include common-password
@include common-session

This configuration will tell PAM to hit the Yubico module first. This splits apart your password field into your password and OTP. The OTP is validated against the Validation Servers, and the password is then passed onto the next module. This configuration will use the Yubico auth servers to check your token.
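The split itself is easy to picture: the trailing 44 characters of what you type are the OTP, and everything before them is your password. A sketch (both values below are made up for illustration):

```shell
# Mimic the pam_yubico split: last 44 chars -> OTP, the rest -> password.
combined="hunter2vvcnrdkvevtjcbdefghijklnrtuvcbdefghijklnrtuv"
len=${#combined}
otp=$(printf '%s' "$combined" | tail -c 44)
password=$(printf '%s' "$combined" | head -c $(( len - 44 )))
echo "password=$password otp=$otp"
```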

Once you have a working config, we’ll move to setting up our own Validation Servers. We’ll need to specify the URL for that in this config later on.

In my case, we’re also using RADIUS. This could be LDAP if you had an LDAP server available. You should be able to use the standard UNIX credentials (/etc/passwd, /etc/shadow) also.

The other important piece to note here is the authfile, /etc/yubikeyid

This file lists the mapping between username and the fixed part of your Yubikey. This is the first 12 chars of the Yubikey OTP (e.g. when you press the button)
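A quick sketch of building one of those mapping lines from an OTP (the username and OTP value here are hypothetical):

```shell
# The first 12 characters of any OTP from a key are its fixed public ID;
# pair it with a username to get an /etc/yubikeyid-style entry.
otp="vvcnrdkvevtjcbdefghijklnrtuvcbdefghijklnrtuv"
fixed=$(printf '%s' "$otp" | cut -c1-12)
printf '%s:%s\n' "alice" "$fixed"
```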


FreeRADIUS authenticating against Active Directory 2008.

I banged my head against a wall for a while on this one. The trick is that you need at least FreeRADIUS 2.1.6 for AD authentication to work properly.

Add Debian backports to your /etc/apt/sources.list

deb lenny-backports main contrib non-free

Import the backports key

wget -O - | apt-key add -

Update and install the new freeradius

apt-get update
apt-get -t lenny-backports install freeradius freeradius-ldap

In your radiusd.conf

ldap {
    # Define the LDAP server and the base domain name
    server = ""
    basedn = "dc=ad, dc=yourcompany, dc=com"

    # Active Directory doesn't allow for Anonymous Binding
    identity = ""
    password = password

    password_attribute = "userPassword"
    filter = "(&(sAMAccountname=%{Stripped-User-Name:-%{User-Name}})(memberOf=CN=Users,DC=ad,DC=yourcompany,DC=com))"

    # This fixes Active Directory 2008 access
    chase_referrals = yes
    rebind = yes

    # The following are RADIUS defaults
    start_tls = no
    dictionary_mapping = ${raddbdir}/ldap.attrmap
    ldap_connections_number = 5
    timeout = 4
    timelimit = 3
    net_timeout = 1
}

In our FreeRADIUS client file /etc/freeradius/clients.conf:

client localhost {
    ipaddr =
    secret = testing123
    nastype = other
}

Use radtest to test our RADIUS is authenticating properly

radtest <username> <password> localhost 1 testing123

It should return an Access-Accept.

Set the address and shared secret of the radius server in /etc/pam_radius_auth.conf. The password of testing123 was defined in our RADIUS client config.

# server[:port] shared_secret   timeout (s)       testing123      1

OpenVPN has an issue with PAM loading the Yubikey module, so we have to LD_PRELOAD the pam module before starting OpenVPN.

export LD_PRELOAD=/lib/; openvpn --config openvpn.conf

For a permanent fix, at the end of the start_vpn function in /etc/init.d/openvpn, just before the $DAEMON line:

    export LD_PRELOAD=/lib/
    $DAEMON $OPTARGS --writepid /var/run/openvpn.$ \
        --config $CONFIG_DIR/$NAME.conf || STATUS=1

Change the path of /lib/ to suit your own system.

I won’t go into the OpenVPN configuration, except that for PAM authentication you need these options in your server config:

plugin /usr/lib/openvpn/ openvpn
ns-cert-type server

Personalising your Yubikey

To host your own Yubikey validation system, you require the secret AES key of your Yubikey. In the past, Yubico could provide this to you. Now, you’re required to flash your Yubikey yourself which will generate a new AES key.

Yubico provide a personalisation tool for Linux, Mac and Windows. If you’re on Windows, you get a nice little GUI. For Linux and Mac, you have a CLI based tool. It’s worth having a look at the ‘Personalization Tool’ page at:

Installing the Personalisation Tool

Install some dependencies:

apt-get install libusb-1.0.0-dev

Grab the latest Personalisation Tool tarball from:

cd ~/yubikey

Extract, build and install libyubikey

tar xf libyubikey-1.5.tar.gz
cd libyubikey-1.5
make install

You’ll need to provide a UID value for flashing your Yubikey. It needs to be 6 bytes, given as 12 hexadecimal characters. You can use this command to generate one for you.

dd if=/dev/urandom bs=6 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n'

You must provide the public name (fixed) parameter in modhex format. The modhex format is a special encoding used to ensure characters sent by the key are always correctly interpreted whatever keyboard layout you use.

You also need to generate yourself a public name for your key. This is known as the ‘fixed’ part, and it’ll be the first 16 chars when you generate your OTP. This will identify your key from anybody else’s.

dd if=/dev/urandom of=/dev/stdout count=100 2>/dev/null | xargs -0 modhex | cut -c 1-10 | awk '{print "vv" $1}'

This command generates some random text, modhex-encodes it, grabs the first 10 characters, then adds ‘vv’ to the front to make it up to 12.
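The two generator one-liners can also be wrapped up as reusable functions (the names here are mine, not part of any Yubico tooling). The modhex step is just a character translation of the hex digits 0-f onto c b d e f g h i j k l n r t u v:

```shell
# Translate hex digits on stdin into the Yubico modhex alphabet.
to_modhex() {
    tr '0123456789abcdef' 'cbdefghijklnrtuv'
}

# 6 random bytes as 12 lowercase hex characters -- usable for -ouid=.
gen_uid() {
    dd if=/dev/urandom bs=6 count=1 2>/dev/null | od -An -tx1 | tr -d ' \n'
}

# A 12-character modhex public name with the 'vv' prefix -- usable for -ofixed=.
gen_pubname() {
    printf 'vv%s\n' "$(gen_uid | to_modhex | cut -c 1-10)"
}

printf '%s' 0123456789abcdef | to_modhex   # prints cbdefghijklnrtuv
```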

You’ll be prompted for a passphrase on your AES key. I leave mine blank, but if you do set one, don’t ever lose it. I believe it’ll stop you from re-personalising your Yubikey.

ykpersonalize -ouid=74657374696e -ofixed=vvcnrdkvevtj
Firmware version 2.1.2 Touch level 1793 Program sequence 1
Passphrase to create AES key:
Configuration data to be written to key configuration 1:
fixed: m:vvcnrdkvevtj
uid: h:74657374696e
key: h:fcaad309a20ne1809c2db2f7f0e8d6ea
acc_code: h:000000000000
ticket_flags: APPEND_CR

Commit? (y/n) [n]: y

Save this information, as we’ll need it later.

Setting up your own YubiKey OTP Validation Server

You need to install two things: The Key Storage Module and the Yubico Validation Server. The Key Storage Module (KSM) holds the secret AES key of your Yubikey token, while the Validation Server does the OTP check against the KSM.

In their 2.0 architecture, you can have multiple KSMs and Validation Servers which work together for redundancy.

KSM Installation

Make a working directory, and get the KSM package

mkdir ~/yubikey && cd ~/yubikey
tar xfz yubikey-ksm-1.3.tgz

Install the KSM files

cd yubikey-ksm-1.3
make install

Install Apache2, PHP and MySQL

apt-get install apache2 php5 php5-mcrypt php5-curl mysql-server php5-mysql libdbd-mysql-perl

Create the ykksm table

echo "CREATE DATABASE ykksm;" | mysql -u root -p

Import the DB schema

mysql -u root -p ykksm < /usr/share/doc/ykksm/ykksm-db.sql

Set up some MySQL permissions

CREATE USER 'ykksmreader';
GRANT SELECT ON ykksm.yubikeys TO 'ykksmreader'@'localhost';
SET PASSWORD FOR 'ykksmreader'@'localhost' = PASSWORD('hYea3Inb');

CREATE USER 'ykksmimporter';
GRANT INSERT ON ykksm.yubikeys TO 'ykksmimporter'@'localhost';
SET PASSWORD FOR 'ykksmimporter'@'localhost' = PASSWORD('ikSab29');


Include path configuration

Set the include path by creating a file /etc/php5/conf.d/ykksm.ini

cat > /etc/php5/conf.d/ykksm.ini << EOF
include_path = "/etc/ykksm:/usr/share/ykksm"
EOF

Make a web server symlink

make -f /usr/share/doc/ykksm/ symlink

Set your configuration settings in /etc/ykksm/ykksm-config.php

  $db_dsn      = "mysql:dbname=ykksm;host=";
  $db_username = "ykksmreader";
  $db_password = "hYea3Inb";
  $db_options  = array();
  $logfacility = LOG_LOCAL0;

Restart Apache2

/etc/init.d/apache2 restart

Test the KSM Server

Try this URL:

curl 'http://localhost/wsapi/decrypt?otp=dteffujehknhfjbrjnlnldnhcujvddbikngjrtgh'
ERR Unknown yubikey

It should return ‘ERR Unknown yubikey’ until we have imported our Yubikey into the database.
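The decrypt endpoint answers with a single ‘OK …’ or ‘ERR …’ line, so a test script can simply branch on the prefix. A minimal sketch (the function name is my own):

```shell
# Classify a raw response line from the KSM's /wsapi/decrypt endpoint.
ksm_status() {
    case "$1" in
        OK*)  echo "decrypted: ${1#OK }" ;;
        ERR*) echo "rejected: ${1#ERR }" ;;
        *)    echo "unexpected response" ;;
    esac
}

# usage: ksm_status "$(curl -s 'http://localhost/wsapi/decrypt?otp=...')"
```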

Install the Yubico Validation Server

The latest version, and documentation can be found at:


Go to our working source directory, and grab the package

cd ~/yubikey

Extract, build and install the server

tar -zxf yubikey-val-2.4.tgz
cd yubikey-val-2.4
make install

Create the ykval database and import the schema

echo 'create database ykval' | mysql -u root -p
mysql -u root -p ykval < /usr/share/doc/ykval/ykval-db.sql

Install the symlink

make symlink

Include path configuration

cat > /etc/default/ykval-queue << EOF
EOF

Create a htaccess file: /var/www/wsapi/2.0/.htaccess

RewriteEngine on
RewriteRule ^([^/\.\?]+)(\?.*)?$ $1.php$2 [L]
php_value include_path ".:/etc/ykval:/usr/share/ykval"
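The RewriteRule above just maps an extensionless wsapi call onto its .php script, keeping any query string; a dot, slash or ‘?’ in the first segment stops the match. A rough equivalent in sed, for illustration only:

```shell
# Illustrative only: what the .htaccess RewriteRule does to a request path.
rewrite() {
    printf '%s\n' "$1" | sed -E 's#^([^/.?]+)(\?.*)?$#\1.php\2#'
}

rewrite 'verify?id=1'   # prints verify.php?id=1
rewrite 'sync'          # prints sync.php
```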

Symlink the htaccess file

cd /var/www/wsapi; ln -s 2.0/.htaccess /var/www/wsapi/.htaccess

Copy the template config file for the Validation Server

cp /etc/ykval/ykval-config.php-template /etc/ykval/ykval-config.php

Edit the file and configure settings in /etc/ykval/ykval-config.php


  # For the validation interface.
  $baseParams = array ();
  $baseParams['__YKVAL_DB_DSN__'] = "mysql:dbname=ykval;host=";
  $baseParams['__YKVAL_DB_USER__'] = 'ykvalverifier';
  $baseParams['__YKVAL_DB_PW__'] = 'password';
  $baseParams['__YKVAL_DB_OPTIONS__'] = array();

  # For the validation server sync
  $baseParams['__YKVAL_SYNC_POOL__'] = array("http://localhost/wsapi/2.0/sync");

  # An array of IP addresses allowed to issue sync requests
  # NOTE: You must use IP addresses here.
  $baseParams['__YKVAL_ALLOWED_SYNC_POOL__'] = array("");

  # Specify how often the sync daemon awakens
  $baseParams['__YKVAL_SYNC_INTERVAL__'] = 10;

  # Specify how long the sync daemon will wait for response
  $baseParams['__YKVAL_SYNC_RESYNC_TIMEOUT__'] = 30;

  # Specify how old entries in the database should be considered aborted attempts
  $baseParams['__YKVAL_SYNC_OLD_LIMIT__'] = 10;

  # These are settings for the validation server.
  $baseParams['__YKVAL_SYNC_FAST_LEVEL__'] = 1;
  $baseParams['__YKVAL_SYNC_SECURE_LEVEL__'] = 40;
  $baseParams['__YKVAL_SYNC_DEFAULT_LEVEL__'] = 60;
  $baseParams['__YKVAL_SYNC_DEFAULT_TIMEOUT__'] = 1;

  // otp2ksmurls: Return array of YK-KSM URLs for decrypting OTP for
  // CLIENT.  The URLs must be fully qualified, i.e., contain the OTP
  // itself.
  function otp2ksmurls ($otp, $client) {
    return array("http://localhost/wsapi/decrypt?otp=$otp",);
  }

In the above configuration, we’re only expecting to use one Validation Server and one KSM. If you’re planning on having multiple Validation Servers and KSMs, then you’ll include the other Validation Servers in the SYNC_POOL, and your KSMs in the URLs at the bottom, returned by the otp2ksmurls function.

Enable the mod_rewrite

a2enmod rewrite

Create the ykval database user

CREATE USER 'ykvalverifier'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON `ykval`.* TO 'ykvalverifier'@'localhost';

Fix some privileges on our config file

chgrp www-data /etc/ykval/ykval-config.php

The Sync Daemon uses the PEAR module System_Daemon so you need to install it:

apt-get install php-pear
pear install System_Daemon-0.9.2

Install the init.d script

ykval-queue install
update-rc.d -f ykval-queue defaults

Start the daemon

/etc/init.d/ykval-queue start


Use curl to test that our server is working

curl 'http://localhost/wsapi/verify?id=1&otp=vvcnrdkvevtefjbrjnlnldnhcujvddbikngjrtgh'

It should return something like this:


Once we import our Yubikey into the database, we should get a nice ‘status=OK’ message.

Importing your keys into the KSM server

Refer back to the output from personalising your Yubikey. You’ll need the fixed part (referred to as publicname in the DB), internal name (UID) and our AES key.

This is an entry for our newly personalised Yubikey.

USE ykksm;
INSERT INTO `yubikeys` (`serialnr`, `publicname`, `created`, `internalname`, `aeskey`, `lockcode`, `creator`, `active`, `hardware`)
VALUES (101209, 'vvcnrdkvevtj', '2010-05-07 15:18:40', '74657374696e', 'fcaad309a20ne1809c2db2f7f0e8d6ea', '000000000000', '', 1, 1);

This entry is required for our systems to authenticate against the Validation Server. I’m not exactly sure about this, as the documentation is somewhat bare. I think you need an administrator-type person’s key details in here. The important part is the ID. This value corresponds to the ‘id=’ value in our CURL requests and in our PAM config.

USE ykval;
INSERT INTO `clients`
(`id`, `active`, `created`, `secret`, `email`, `notes`, `otp`)
VALUES (1, 1, 1, 'fcaad309a20ne1809c2db2f7f0e8d6ea', 'your@email.addr', 'Any text you want', 'vvcnrdkvevterfbtelvnvkkueenecrlfnlhdjetrhgnk');

We’ll hit our new Validation Server to make sure it’s working

curl "http://localhost/wsapi/2.0/verify?id=1&nonce=askjdnvajsndjkasndvjsnad&otp=vvcnrdkvevtjkreuvvlhtubjecbrticjneckgrigkck"

It should return something like this:


In this URL, we’ve added the ‘nonce’ parameter. This is just a test to make sure the v2.0 API is working. ‘status=OK’ means it’s all good! If you get ‘NOT_ENOUGH_ANSWERS’, it means the server had trouble syncing with the other Validation Servers.
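Since the verify response is a set of key=value lines, scripts can pull the status field out directly. A small sketch (the helper name and the sample response body are my own):

```shell
# Pull the status= field out of a verify response (a set of key=value lines).
verify_status() {
    printf '%s\n' "$1" | sed -n 's/^status=//p' | tr -d '\r'
}

# Illustrative response body; the real one also carries an h= signature field.
response="$(printf 't=2010-05-20T12:00:00Z\nstatus=OK\n')"
verify_status "$response"   # prints OK
```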

Now we’ll point PAM at our new Validation Server for authentication. Add this to your PAM config:


auth required /usr/local/lib/security/ id=1 authfile=/etc/yubikeyid url= debug

If you watch /var/log/auth.log, you should see the PAM module spitting out some debugging information, which may be useful. While the debug option is on, it also logs your plain-text password, so make sure you remove the option later.


If you see an error like this:

PAM unable to dlopen(/lib/security/ /lib/security/ undefined symbol: pam_set_data

you’ll need the LD_PRELOAD trick from above. I believe it’s something to do with how the PAM module gets dlopened.

Posted in Personal at May 20th, 2010.