

DfontSplitter 0.4 for Windows

I’m delighted to announce DfontSplitter 0.4 for Windows. After a nine-year hiatus without software updates, this release has big under-the-bonnet changes!

The application is now built with .NET 4.7.2 and runs on Windows 7 – Windows 10. If you still need support going back as far as Windows 98(!), you can still use the old version.

A new, improved fondu (which does the bulk of the work) is bundled as a DLL that is Windows-native and no longer requires the Cygwin library. It also includes a number of memory safety improvements.

To fix the long-standing issue where extracted TTFs didn’t quite play nicely with Windows, DfontSplitter 0.4 for Windows embeds functionality from FontForge to do some final conversion work to make your fonts work perfectly with Windows.

Source is available on GitHub (DfontSplitter, fondu-win-dll)

The “T with chisel” DfontSplitter icon is licensed under the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. The icon includes a modified version of “Chisel wood 24mm” by Isabelle Grosjean, which is also licensed as such.

QuickArchiver on Thunderbird — Archiving Messages to the Right Folder with One Click

QuickArchiver icon

Despite the dominance of webmail, I have long used a traditional desktop email client. I like having a local mail archive should “the cloud” have trouble, as well as the ability to exert control over the user interface and user experience. (That might be partly a euphemism for not having to see ads!)

Apple’s Mail, built into macOS (going to have to get used to not calling it OS X!), has served me pretty well for quite some time now, alongside Thunderbird when I’m on Linux. While Mail offered the smoothest interface for the platform, it didn’t always have all the features I wanted.

For example, running mail rules is more limited than I wanted in Mail: I could have rules run automatically as messages arrived in my inbox, or disable them entirely. But how I actually wanted to use rules was to cast my eye over my inbox, and then bulk archive (to a specific folder) all emails of a certain type if I’d decided none needed my fuller attention.

Recently, I moved to Thunderbird on my Mac for managing email and discovered QuickArchiver.

As well as letting you write rules yourself, QuickArchiver offers the clever feature of learning which emails go where, and then suggesting the right folder to which that message can be archived with a single click.

It’s still early days, but I am enjoying this. Without spending time writing rules, I’m managing email as before, and QuickArchiver is learning in the background what rules should be offered. The extra column I’ve added to my Inbox is now starting to populate with that one-click link to archive the message to the correct folder!

It’s just a nice little add-on if, like me, you (still??) like to operate in this way with your email.

Reverse Proxying ADFS with Nginx

In my recent trials and tribulations with ADFS 3.0, I came up against an issue where we were unable to host ADFS 3.0 with Nginx as one of the layers of reverse proxy (the closest layer to ADFS).

When a direct connection, or a cURL request, was made to the ADFS 3.0 endpoints from the machine running Nginx, all seemed well, but as soon as you actually tried to ferry requests through a proxy_pass statement, users were greeted with HTTP 502 or 503 errors.

The machine running ADFS was offering up no other web services — there was no IIS instance running, or anything like that. It had been configured correctly with a valid TLS certificate for the domain that was trusted by the certificate store on the Nginx machine.

It turns out that despite being the only HTTPS service offered on that machine through HTTP.sys, you need to explicitly configure which certificate to present by default. Apparently, requests that come via Nginx proxy_pass are missing something (the SNI negotiation?) that allows HTTP.sys to choose the correct certificate to present.
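As an aside: if the missing SNI is indeed the culprit, newer Nginx versions (1.7.0 onwards) can be asked to send SNI to the upstream, which might avoid changing the HTTP.sys default at all. This is a sketch I haven’t tested against ADFS; the hostname is a placeholder:

```nginx
# Send a server name via SNI during the TLS handshake with the upstream
proxy_ssl_server_name on;
# Name to present in SNI (defaults to the host in proxy_pass)
proxy_ssl_name adfs.example.org;
```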

So, if and only if you are sure that ADFS is the only HTTPS service you are serving up on the inner machine, you can force the correct certificate to be presented by default, which resolves this issue and allows the Nginx reverse proxied requests to get through.

With that warning given, let’s jump in to what we need to do:

Retrieve the correct certificate hash and Application ID

netsh http show sslcert

You’ll need to note the appid and the certificate hash for your ADFS 3.0 service.

Set the certificate as the default for HTTP.sys

We’ll use netsh’s interactive mode, as I wasn’t in the mood to figure out how to escape curly brackets on Windows’ command line!

You want the curly brackets literally around the appid, but not around the certhash. The ipport value 0.0.0.0:443 binds all addresses on port 443.

netsh> http
netsh http> add sslcert ipport=0.0.0.0:443 appid={appid-from-earlier} certhash=certhash-from-earlier
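For reference, the non-interactive form is a single line (in cmd.exe the curly brackets pass through literally; in PowerShell you would quote the appid). The values are the placeholders from above:

```
netsh http add sslcert ipport=0.0.0.0:443 certhash=certhash-from-earlier appid="{appid-from-earlier}"
```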

Verify the proxy_pass settings

Among other configuration parameters, we have the following in our Nginx server stanza for this service:

proxy_redirect off;
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_set_header X-MS-Proxy the-nginx-machine;
proxy_set_header Host the-hostname-in-question;
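For context, here is a cut-down sketch of how the full server stanza might look; the hostnames are placeholders, and certificate and logging directives are omitted:

```nginx
server {
    listen 443 ssl;
    server_name adfs.example.org;  # placeholder external hostname

    location / {
        proxy_pass https://inner-adfs-machine;  # placeholder upstream
        proxy_redirect off;
        proxy_http_version 1.1;
        proxy_request_buffering off;
        proxy_set_header X-MS-Proxy the-nginx-machine;
        proxy_set_header Host the-hostname-in-question;
    }
}
```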

And, with that, we were successfully reverse proxying ADFS 3.0 with Nginx. 🙂

Forms-based ADFS 3.0 Endpoints Inexplicably Showing HTTP 503

Azure Active Directory logo

As with many other organisations, at my day job we are using the Office 365 service for email, contacts and calendars. There are a few ways to integrate 365 with your local Active Directory, and we have been using Active Directory Federation Services (ADFS) 3.0 for handling authentication: users don’t authenticate on an Office-branded page, but get redirected after entering their email addresses to enter their passwords on a page hosted at our organisation.

We also use the Azure AD Connect tool (formerly called Azure AD Sync, and called something else even before that) to sync the directory with the cloud, but this is only for syncing the directory information — we’re not functionally using password sync, which would allow people to authenticate at Microsoft’s end.

We recently experienced an issue where, suddenly, the endpoints for ADFS 3.0 that handle forms-based sign in (so, using a username and password, rather than Integrated Windows Authentication) were returning an HTTP 503 error. The day before, we had upgraded Azure AD Sync to the new Azure AD Connect, but our understanding was that this shouldn’t have a direct effect on ADFS.

On closer examination of the 503 issue, we would see errors such as this occurring in the AD FS logs:

There are no registered protocol handlers on path /adfs/ls/ to process the incoming request.

The way that the ADFS web service endpoints are exposed is through the HTTP.sys kernel-mode web serving component (yeah, it does sound rather crazy, doesn’t it) built into Windows.

One of the benefits of this rather odd approach is that multiple different HTTP serving applications (IIS, Web Application Proxy, etc.) can bind to the same port and address, but be accessed via a URL prefix. Windows refers to these bindings as ‘URL ACLs’.
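You can see the current URL reservations for yourself with:

```
netsh http show urlacl
```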

To cut a very long story short, it emerged eventually that the URL ACLs that bind certain ADFS endpoints to HTTP.sys had become corrupted (perhaps in the process of uninstalling an even older version of Directory Sync). I’m not even sure they were corrupted in the purely technical sense of the word, but they certainly weren’t working right, as the error message above suggests!

Removing and re-adding the URL ACLs in HTTP.sys, granting permissions explicitly to the user account which is running the ‘Active Directory Federation Services’ Windows service, allowed the endpoints to function again. Users would see our pretty login page again!

netsh http delete urlacl url=https://+:443/adfs/
netsh http add urlacl url=https://+:443/adfs/ user=DOMAIN\account-that-is-running-adfs

We repeated this process for other endpoints that were not succeeding and restarted the Active Directory Federation Services service.

Hurrah! Users can log in to their email again without having to be on site!

This was quite an interesting problem that had me delving rather deeply into how Windows serves HTTP content!

One of the primary frustrations when addressing this issue was that a lot of the documentation and Q&A online is for the older release of ADFS, rather than for ADFS 3.0! I hope, therefore, that this post might help save some of that frustration for others who run into this problem.

Isn’t it funny that so frequently it comes back to “turn it off, and turn it back on again”? 🙂


SaveTimer

About a month ago (whoops!), I released another open source project into the wild, SaveTimer.

This was one of those “wouldn’t that be a cool idea” moments that spontaneously resulted in a modest little project. The whole thing was conceived, built and published in the space of a few hours!

Save Timer

SaveTimer screenshot

Notify a user if they have not saved in a ‘watch directory’ for a certain interval.

Basic Description

This is a very simple application, written in C#/.NET 4.5.2, which observes a specified ‘watch directory’ on a given interval. The most recent file in the watch directory is examined to determine its last modified time. If this is older than the specified interval time, the user is shown a message reminding them to save their work. The user can suppress the messages for an indefinite period of time by right-clicking the icon in the ‘clock box’/system tray and choosing ‘Stop reminding me’.
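SaveTimer itself is written in C#, but the core check is simple enough to sketch in a few lines of Python (the function and variable names here are my own, not from the real code):

```python
import time
from pathlib import Path

def needs_save_reminder(watch_dir, interval_seconds, now=None):
    """Return True if the newest file in watch_dir was last modified
    more than interval_seconds ago (or if nothing has been saved yet)."""
    now = time.time() if now is None else now
    files = [p for p in Path(watch_dir).iterdir() if p.is_file()]
    if not files:
        return True  # nothing saved yet: remind straight away
    newest_mtime = max(p.stat().st_mtime for p in files)
    return (now - newest_mtime) > interval_seconds
```

The real application wires a check like this to a timer and a system tray notification, with the ‘Stop reminding me’ option suppressing further messages.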

This was written to support academic examination access arrangements, where users are intentionally only given access to a cut-down word processor such as WordPad, without spellcheck support. Unfortunately, WordPad does not autosave, so this application provides a regular reminder for the user to save. In this usage, the user is given a blank mapped drive to save in. In addition to the regular save reminders, the application also ensures that the user has saved in the correct directory to avoid data loss and ensure compliance with controlled conditions of where they must save.

SaveTimer logo

SaveTimer logo (the Dashicons clock, licensed under GPLv2 or later with font exception)

At the risk of sounding immodest, one of the most enjoyable things about this project was jumping right back into the C#/.NET environment, with which I have spent less time recently, and discovering that I still had all of the intuition of how to build the functionality I desired. Perhaps this is testament more to Visual Studio’s IntelliSense suggestions and the simplicity of the application, rather than my memory, but it nevertheless was a rewarding feeling to quickly go from zero to an app that does a specific task quite well!

I’m also pleased to say it ran in… shall we say, production… without causing any issues. If it saves one piece of work, I think it will be worth it!

SaveTimer is released under the GNU GPLv3 or later. The code is available on GitHub and you can also download a ready-to-run executable, if you have .NET 4.5.2 installed. No installer required!

Teaching Computer Security Basics

Over the past few years, I have ended up coming into contact with many computers belonging to individuals. My reason for doing so has varied, but usually I am helping them with something unrelated to security.

I found myself constantly saying the same things when I noticed bad security practices — “you really should update or remove Java”, “you need to actually stop clicking ‘Postpone’ and restart the computer some time”, “untick that box to install the toolbar” and so on.

Computer security is hard.

But, particularly when it comes to computers belonging to individuals, we have let the perfect become the enemy of the good. We have allowed anti-virus vendors to parrot messages about “total protection” instead of teaching sound principles and encouraging good practice.

Computer security, at least in this context, is in large part a human problem, not a technology problem.

So, a while ago, I had an idea to put together a really quick, 5-minute presentation that would encourage computer security principles that could dramatically lower the risk of individuals’ machines getting infected. I stripped it down to what I saw as the four most important principles (few enough that they might actually be remembered!):

  1. Keep software up-to-date — with emphasis on the importance of updates, despite the inconvenience, and mention the high-risk software titles du jour whose updates may not be entirely hands-off (Flash, Java, etc.).
  2. Keep up-to-date antivirus — with emphasis on such technology as the last line of defence, not ever a solution in and of itself.
  3. Install software from trusted sources — perhaps the most important principle that requires behaviour change, this is about getting people to feel confident enough to build a trust model for software and then make informed decisions about each and every installation they make.
  4. Be suspicious — in particular about communications that invite clicking on things and so on, including using alternative channels to verify legitimacy of things that look suspicious (e.g. never clicking unexplained links!)

I’ve not given this talk yet, but I’d like to. It feels that computer security on home PCs is, in general, so awful, that even a very basic set of ideas that are memorable enough to implement can probably make a significant difference to the health of our personal information infrastructure.

I would welcome feedback from others on these slides, as well as the idea.

I think it is quite important to keep it to five minutes and make it concise enough to be memorable and actionable, but I’m sure this idea can (and needs to) evolve and improve over time.

If you would like to use the slides, feel free to do so under the Creative Commons BY-NC-SA 2.0 licence. It would be great if many people could hear this message.

Initial Thoughts on the Windows 8 Developer Preview

Windows 8 'Headlines', showing RSS feed headlines for my blog

I was interested to take a look at the new publicly-available developer preview of Windows 8 that was released today. I have a few (poorly organised and still unrefined) initial thoughts.

After an initial hiccup running the developer preview in VMware, I switched over to a machine with VirtualBox and got up and running. The installation process was impressively speedy, even under the virtualised conditions, and asked few questions. A good start.

Initially, it is a little disconcerting not to have the desktop right in front of you after logging in, but I suspect that with a little retraining, the new ‘Start’ screen might prove a more convenient starting interface. The Windows Phone-style ‘tiles’ interface is genuinely innovative (praise I rarely would find myself directing at Microsoft) and seems to work in a fairly intuitive way.

I should mention at this point that my virtual machine setup and ‘traditional’ hardware combination mean that only a mouse and keyboard were available, making it impossible for me to evaluate the touch features of the OS (and making some of the ‘Metro’ apps and UI a little difficult to use). This is, of course, a limitation of my configuration, but it also raises an important point — if this new Metro UI will be the default even for computers with no touch capabilities, the whole thing needs to be smooth, optimised and not at all frustrating for this category of users too. It doesn’t feel this way yet — having to perform awkward drag gestures with a mouse isn’t a good experience at all. The viability of having a single operating system, with shared UI concepts, on very different types of computing devices is something that is yet to be proven.

Internet Explorer 10's 'Metro' interface, showing this website

These issues aside, I find myself quite impressed at how well the combination of the new ‘Metro’ apps themselves work alongside the traditional desktop. The disparity between the two types of apps was something I thought might make the system feel clunky and ‘part-baked’, but I find myself likening it to the Mission Control view in Mac OS X Lion — the Metro apps are like your Lion apps in Full-Screen Mode, and you still have access to the traditional desktop over to the left. In short, I actually think it works.

There are certainly some minor oddities at this stage — and obviously this is far from a finished, polished product. But there is promise in this hybrid-UI design that I hadn’t expected to find. I certainly need to spend a bit more than a short hour playing with the system before I’ll really understand what I think of its potential.

The biggest challenge will be how well a single operating system will work on very different types of computing devices — and indeed whether the hardware and software on the new generation of Windows tablet devices will be up to the task.

DfontSplitter for Windows 0.3.1

DfontSplitter logo

“What? I thought you updated this yesterday?”

Well, I did. 😛

Hot on the heels of yesterday’s auto-update-capable release, is DfontSplitter for Windows 0.3.1. This version includes a single fix, introducing a new method of avoiding the dreaded ‘corrupt font file’ error. For some unknown reason, sometimes Windows simply will refuse to work with the original fondu output file, but if DfontSplitter simply makes a duplicate of the file, Windows will happily see it as a TrueType font! It is very odd behaviour, and this fix only works in some cases, but it should reduce the incidence of ‘corrupt font files’ being output from DfontSplitter for Windows. This means users will less frequently have to jump through a secondary hoop to get Windows to play nicely with DfontSplitter’s outputs.

Here are the official release notes:

New Features and Bugfixes

  • Uses a new method to decrease the incidence of ‘invalid font file’ errors on Windows. More fonts should now convert correctly without requiring further intervention.

Known Issues

  • Some fonts still require further conversion after DfontSplitter has created the TrueType font file. FontForge is one option for this.

As always, you can get the latest and greatest version of DfontSplitter by downloading it from the DfontSplitter project page.

DfontSplitter for Windows 0.3

DfontSplitter logo

I have just released a new version of DfontSplitter for Windows, version 0.3. The main change here is a brand new automatic update notification system. Like the Mac version, which uses the excellent Sparkle Framework, users of DfontSplitter for Windows can now keep the application up-to-date without having to manually check the website. This makes my development of the software easier, as I can release smaller feature releases more frequently, rather than large releases that must have a longer lifespan.

Unfortunately, because the automatic update feature is new, previous users of DfontSplitter 0.2 are not going to be notified automatically about this new release. 🙁

If you know any other users of DfontSplitter for Windows, please let them know this update is available so they might have the opportunity to keep up-to-date with this new feature too.

Here are the official release notes for this version:

New Features and Bugfixes

  • New automatic update facility, similar to that of DfontSplitter for Mac. Users can now be notified of new releases in the future, which may include new features.

Known Issues

As always, you can get the latest and greatest version of DfontSplitter by downloading it from the DfontSplitter project page.

Three Years of Mac

My 13-inch white MacBook on the day it arrived

This month marks three years since I purchased my white MacBook, my first Mac computer. Other than the AppleCare coverage stopping (good job they just replaced my battery, yay!), this represents quite a milestone in my technological life.

I have always had a passion for playing with anything and everything when it comes to technology. I am not satisfied merely to find a technology solution; I am excited and highly motivated to seek out the best solution that meets the specification in the best way, and then to understand it and know everything about it.

My interest in the Mac was born from this insatiable desire to understand everything. The Mac was, a little over three and a half years ago, very much a mystery to me. Having explored the Windows and Linux worlds extensively, the Mac was the last place in desktop computing that I really hadn’t looked into in great detail.

Over the last three years, I have found that my investment in the Mac has proved worthwhile. Mac OS X has ended up being my primary platform for desktop computing. While I still spend time working in the Windows and Linux worlds and enjoy discovering and learning about the new things happening there, the Mac has been a big focus for me in recent years.

So I ask myself — objectively, why has the Mac become my primary desktop platform?

  • Mac OS X is a Unix operating system. This has a number of advantages, but it mainly means rock-solid reliability (in theory at least) and a decent way to interact with the machine via the command line.
  • It is elegant and put together with passion and care. Some bits of software, especially third-party driver and hardware support software for other platforms, aren’t — they are hacked together at the last minute and at low budget, just to work. Almost everything that ships with the Mac, and a lot of third-party software for it, is built in this fundamentally different way: as something you would be proud to show off.
  • It ‘just works’. Often dismissed as hyperbole, this marketing phrase more often than not is true on the Mac. There are notable exceptions and a few annoying things that you don’t get with generic PC hardware as well, but most of the time, you plug something in, or switch something on for the first time and it just does what it is supposed to.
  • Generally speaking, you get what you pay for. Apple don’t make cheap computers. But neither do I think they make overpriced ones. You pay a premium price for an Apple computer, but you get a fair return for that price in terms of the quality of the product. Again, it comes back to the point about passion — Apple will not ship something that they are not entirely happy with, so what you get is something that meets their high standards.

Having said all that, I am still very interested in using everything and anything. While the Mac may be where my primary focus is on the desktop for now and the foreseeable future, I am still very much interested in what is going on in the Linux desktop and Windows worlds, and you can be sure I’ll continue playing with all sorts of technology in the future.

Here’s to the next three years of Mac — and perhaps beyond!