

Running SpinRite 6.0 on a Mac

SpinRite logo

SpinRite is a fantastic tool for repairing and maintaining hard drives, and I am proud to say that its purchase price has been more than recouped by the drives it has brought back into service that would otherwise have needed replacing!

Running it natively on an Intel Mac hasn’t been possible with version 6.0: it actually boots fine, but there is no way to give keyboard input, and thus no way to kick off a scan.

Reports that people had succeeded in getting SpinRite to work on various weird and wonderful platforms indirectly, using VirtualBox and its raw disk access mode, led me to experiment with this approach to run SpinRite on a Mac. It is particularly useful on iMacs, where pulling the hard drive out of the case is… undesirable(!)

This is an advanced, technical process.

Performing the wrong operations when you have raw access to the disk, a technique this process uses, can cause you to lose data. You must have a backup.

Obviously, I do not accept any responsibility and cannot help if you break things by using these notes. Hard hats must be worn beyond this point. All contractors must report to the site office.

Boot from Another Disk

You’ll need a working macOS install on another disk that you can boot from, as we need to unmount all the volumes on the disk to be scanned in order to gain raw access to it. I use SuperDuper to make bootable backups, and these work great for this purpose too.

Prepare the Environment

Make sure you have VirtualBox installed, with the optional Command Line Tools.

Turn off screen savers, sleep timers and screen lock, just in case the VM has taken keyboard input away from you and you are unable to unlock the Mac to check on SpinRite’s progress. It’s certainly not an ideal situation to have to pull the plug on the computer while that VM has raw access to your target disk!
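
If it helps, one extra belt-and-braces step (my own suggestion, not part of the original procedure) is to keep the Mac awake from a Terminal window with the built-in caffeinate utility:

caffeinate -dims

The -d, -i, -m and -s flags prevent display sleep, idle sleep, disk idle sleep and system sleep (on AC power) respectively; leave it running until SpinRite has finished.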

Identify the Target Disk

It is critical that you identify the BSD device name for the whole disk that you want to operate on. In my case, I’d booted from disk1 and the SpinRite target disk was disk0.

Determine the correct disk identifiers with:

diskutil list
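
To give a flavour of where this heads next (a sketch only; the disk number follows my disk0 example and the VMDK path is made up), the target disk’s volumes are unmounted and a raw-access VMDK is created for VirtualBox:

diskutil unmountDisk /dev/disk0
sudo VBoxManage internalcommands createrawvmdk -filename ~/spinrite-disk0.vmdk -rawdisk /dev/disk0

That VMDK can then be attached to a VM which boots the SpinRite ISO; you may need to adjust permissions on /dev/disk0 (or run VirtualBox with sufficient privileges) for raw access to work.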

» Read the rest of this post…

Morning

QuickArchiver on Thunderbird — Archiving Messages to the Right Folder with One Click

QuickArchiver icon

Despite the dominance of webmail, I have long used a traditional desktop email client. I like having a local mail archive should “the cloud” have trouble, as well as the ability to exert control over the user interface and user experience. (That might be partly a euphemism for not having to see ads!)

Apple’s Mail.app built into macOS (going to have to get used to not calling it OS X!) has served me pretty well for quite some time now, alongside Thunderbird when I’m on Linux, and while Mail.app offered the smoothest interface for the platform, it didn’t always have all the features I wanted.

For example, the ability to run mail rules in Mail.app is more limited than I wanted. I could have rules run automatically as messages arrived in my inbox, or disable them entirely. But how I actually wanted to use rules was to cast my eye over my inbox, and then bulk archive (to a specific folder) all emails of a certain type if I’d decided none needed my fuller attention.

Recently, I moved to Thunderbird on my Mac for managing email and discovered QuickArchiver.

As well as letting you write rules yourself, QuickArchiver offers the clever feature of learning which emails go where, and then suggesting the right folder to which that message can be archived with a single click.

It’s still early days, but I am enjoying this. Without spending time writing rules, I’m managing email as before, and QuickArchiver is learning in the background what rules should be offered. The extra column I’ve added to my Inbox is now starting to populate with that one-click link to archive the message to the correct folder!

It’s just a nice little add-on if, like me, you (still??) like to operate in this way with your email.

WordPress, Custom Field Suite and the WP REST API as a Middleware Platform

WordPress logo

Over the last five years or so, I’ve worked a lot with WordPress — developing custom plugins as well as piecing together pre-existing components to build (hopefully) really great websites.

But WordPress is more than just a blogging tool, and can be more than just a tool for websites.

My most recent WordPress-related endeavour has been in my day job.

I have been looking at taking various bits of information about business processes that have thus far been disparate and disconnected, and structuring and centralising that information so it can be more useful.

I’ve been using custom post types in WordPress for different types of information. Custom Field Suite makes describing the metadata we want to store a breeze, and effortlessly provides a beautiful and usable interface for “mere mortals” to input and manipulate the data later in WP-Admin.

I work in an education environment; a simple example of one of these entities is the lunch menu (formerly just a Word document with no meaningful machine-readable structure at all). This was a nice, easy and public entity to start with.

So, we have:

  • A custom post type for the lunch menu
  • A Custom Field Suite field group attached to the custom post type
  • The Members plugin to control read and write access to that custom post type

The final piece of the puzzle is using the WP REST API to be able to expose this data to other systems.

With a very small amount of code, the REST API can be convinced to enable access to these custom entities — and of course we still retain WordPress’ access control (with a little help from the Members plugin) to ensure we’re not too free with our data!
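
As an illustration of that ‘very small amount of code’ (a hedged sketch; the lunch_menu post type name and the exact arguments are my assumptions for this example), registering the custom post type with show_in_rest set is enough for the WP REST API to expose it:

<?php
// Sketch: register a 'lunch_menu' custom post type and expose it via the WP REST API.
add_action( 'init', function () {
    register_post_type( 'lunch_menu', array(
        'label'        => 'Lunch Menus',
        'public'       => false,          // not part of the public site
        'show_ui'      => true,           // editable in WP-Admin
        'show_in_rest' => true,           // expose at /wp-json/wp/v2/lunch_menu
        'rest_base'    => 'lunch_menu',
        'supports'     => array( 'title', 'custom-fields' ),
    ) );
} );

Posts of this type then appear at /wp-json/wp/v2/lunch_menu, still subject to WordPress’ normal capability checks.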

Now we have somewhere for non-technical users to go to input data, and then the ability to export that data through the REST API into any other application. Because we’ve formalised the structure of the information, we have the flexibility to display it in all sorts of different ways that are appropriate for the medium.

So our lunch menu can be:

  • Exposed via the web
  • Displayed on a screen in public areas
  • And more!

The lunch menu design was an exciting proof of concept of the idea. I’m now moving on to slightly more ambitious projects which involve using a little bit of custom ‘glue’ in PowerShell (but whichever programming language is appropriate could be used!) to write data from other external systems into WordPress for later use.
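
As a hedged sketch of that PowerShell ‘glue’ (the URL, the lunch_menu route and the use of Basic authentication are assumptions for illustration; the real integration may authenticate differently):

# Sketch: push a record from an external system into WordPress via the REST API.
# Assumes the WP REST API is reachable at this made-up URL and that an
# authentication plugin such as Basic Auth is enabled for the service account.
$cred = Get-Credential
$body = @{
    title  = 'Lunch menu, week commencing 5 September'
    status = 'publish'
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri 'https://intranet.example.org/wp-json/wp/v2/lunch_menu' `
    -Credential $cred `
    -ContentType 'application/json' `
    -Body $body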

Getting information out of big proprietary information systems using their provided tools, which require… shall we say, patience… has been a challenge. But, once liberated, this information is now stored, structured, and can be queried simply and securely for all sorts of uses.

Back in 2011 when I started developing for WordPress with Chris from Van Patten Media, I remember thinking to myself, “yeah, I can probably figure this out”. It perhaps wouldn’t have been so obvious then that building a skill set with a ‘blogging tool’ would prove useful five years later in a quite different context, but this is testament to the versatility of the WordPress platform and what it has become!

The Windows 10 Experience

New Windows logo

I haven’t said much about Windows 10 here on this blog, but my day job brings me into contact with it quite extensively.

There is a huge amount about the Windows experience that this release improves, but there are also elements of Microsoft’s new approach to developing and releasing it that are problematic.

The Good

Installing Windows 10 across a variety of devices, it is striking just how much less effort is required to source and install drivers. In fact, in most cases no effort is required at all! Aside from the occasional minor frustration of bloated drivers that are desperate to add startup applications, this makes such a positive difference. Unlike in the past, you can typically just install Windows, connect to a network, and everything will work.

This is particularly notable in any environment where you have a large number of devices with anything more than a little bit of hardware diversity. Previously in an enterprise environment, hunting for drivers, extracting the actual driver files, removing unwanted ‘helper application’ bits and building clean driver packages for deployment was tedious and wasteful of time. Now, much of the time, you let Windows Update take care of the drivers for you over the network, all running in parallel to the actual provisioning process that you have configured!

There are numerous other pockets of the operating system where it really feels like there has been a commitment to improve the user experience, but from my “world of work” experience of the OS, this is the most significant. It’s true as well that many of the criticisms you could make about past versions of Windows no longer apply.

The Bad

I guess that the coalescing of monthly Windows Updates into a single cumulative update helps significantly with the ‘236 updates’ problem with (and atrocious performance of) Windows Update in Windows 7. However, Microsoft’s recent history of updates causing issues (the issues with KB3163622 and Group Policy, for example), combined with the inability to apply updates piecemeal, leaves some IT departments reluctant to apply the monthly patch. The result, if Microsoft continues to experience these kinds of issues, or doesn’t communicate clearly about backwards-incompatible changes, is more insecure systems, which hurts everybody.

This leads me to my other main complaint. There have been reports about the new approach Microsoft is taking to software testing. An army of ‘Insiders’ seems to be providing the bulk of the telemetry and feedback now, but my concern is that this testing feedback doesn’t necessarily end up being representative of all of the very diverse groups of Windows users. Particularly when deploying Windows 10 in an enterprise environment, it has felt at times like we are the beta testers! When one update is a problem, you then have to put people at risk by rejecting them all. 🙁

(Yes, there is LTSB, but it hangs back a very long way on features!)

The Ugly

Windows 10 'Hero' image

At least you can turn it off on the login screen officially now. 🙂

Reverse Proxying ADFS with Nginx

In my recent trials and tribulations with ADFS 3.0, I came up against an issue where we were unable to host ADFS 3.0 with Nginx as one of the layers of reverse proxy (the closest layer to ADFS).

When a direct connection, or a cURL request, was made to the ADFS 3.0 endpoints from the machine running Nginx, all seemed well, but as soon as you actually tried to ferry requests through a proxy_pass statement, users were greeted with HTTP 502 or 503 errors.

The machine running ADFS was offering up no other web services — there was no IIS instance running, or anything like that. It had been configured correctly with a valid TLS certificate for the domain that was trusted by the certificate store on the Nginx machine.

It turns out that despite being the only HTTPS service offered on that machine through HTTP.sys, you need to explicitly configure which certificate to present by default. Apparently, requests that come via Nginx proxy_pass are missing something (the SNI negotiation?) that allows HTTP.sys to choose the correct certificate to present.

So, if and only if you are sure that ADFS is the only HTTPS service you are serving up on the inner machine, you can force the correct certificate to be presented by default, which resolves this issue and allows the Nginx reverse proxied requests to get through.

With that warning given, let’s jump in to what we need to do:

Retrieve the correct certificate hash and Application ID

netsh http show sslcert

You’ll need to note the appid and the certificate hash for your ADFS 3.0 service.

Set the certificate as the default for HTTP.sys

We’ll use netsh’s interactive mode, as I wasn’t in the mood to figure out how to escape curly brackets on Windows’ command line!

You want the curly brackets literally around the appid, but not the certhash.

netsh
netsh> http
netsh http> add sslcert ipport=0.0.0.0:443 appid={appid-from-earlier} certhash=certhash-from-earlier

Verify the proxy_pass settings

Among other configuration parameters, we have the following in our Nginx server stanza for this service:

proxy_redirect off;
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_set_header X-MS-Proxy the-nginx-machine;
proxy_set_header Host the-hostname-in-question;

And, with that, we were successfully reverse proxying ADFS 3.0 with Nginx. 🙂

Forms-based ADFS 3.0 Endpoints Inexplicably Showing HTTP 503

Azure Active Directory logo

As with many other organisations, at my day job we are using the Office 365 service for email, contacts and calendars. There are a few ways to integrate 365 with your local Active Directory, and we have been using Active Directory Federation Services (ADFS) 3.0 for handling authentication: users don’t authenticate on an Office-branded page, but get redirected after entering their email addresses to enter their passwords on a page hosted at our organisation.

We also use the Azure AD Connect tool (formerly called Azure AD Sync, and called something else even before that) to sync the directory with the cloud, but this is only for syncing the directory information — we’re not functionally using password sync, which would allow people to authenticate at Microsoft’s end.

We recently experienced an issue where, suddenly, the endpoints for ADFS 3.0 that handle forms-based sign in (so, using a username and password, rather than Integrated Windows Authentication) were returning an HTTP 503 error. The day before, we had upgraded Azure AD Sync to the new Azure AD Connect, but our understanding was that this shouldn’t have a direct effect on ADFS.

On closer examination of the 503 issue, we would see errors such as this occurring in the AD FS logs:

There are no registered protocol handlers on path /adfs/ls/ to process the incoming request.

The way that the ADFS web service endpoints are exposed is through the HTTP.sys kernel-mode web serving component (yeah, it does sound rather crazy, doesn’t it) built into Windows.

One of the benefits of this rather odd approach is that multiple different HTTP serving applications (IIS, Web Application Proxy, etc.) can bind to the same port and address, but be accessed via a URL prefix. HTTP.sys refers to these reservations as ‘URL ACLs’.
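
You can inspect the reservations HTTP.sys currently holds (a read-only check, handy before changing anything) with:

netsh http show urlacl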

To cut a very long story short, it emerged eventually that the URL ACLs that bind certain ADFS endpoints to HTTP.sys had become corrupted (perhaps in the process of uninstalling an even older version of Directory Sync). I’m not even sure they were corrupted in the purely technical sense of the word, but they certainly weren’t working right, as the error message above suggests!

Removing and re-adding the URL ACLs in HTTP.sys, granting permissions explicitly to the user account which is running the ‘Active Directory Federation Services’ Windows service, allowed the endpoints to function again. Users would see our pretty login page again!

netsh http delete urlacl url=https://+:443/adfs/
netsh http add urlacl url=https://+:443/adfs/ user=DOMAINACCOUNT\thatisrunningadfs

We repeated this process for other endpoints that were not succeeding and restarted the Active Directory Federation Services service.

Hurrah! Users can log in to their email again without having to be on site!

This was quite an interesting problem that had me delving rather deeply into how Windows serves HTTP content!

One of the primary frustrations when addressing this issue was that a lot of the documentation and Q&A online is for the older release of ADFS, rather than for ADFS 3.0! I hope, therefore, that this post might help save some of that frustration for others who run into this problem.

Isn’t it funny that so frequently it comes back to “turn it off, and turn it back on again”? 🙂

Staying Safe

I have written on this subject before, but as suspected, surveillance is back on Parliament’s agenda again.

Is the Investigatory Powers Bill the latest attempt at a “modernising” of existing laws and conventions, as is often claimed, or an unprecedented extension of surveillance powers?

I would argue strongly that the capability for your local council, tax enforcement authorities, and the myriad other agencies that are proposed to have access to this data, to ‘see’ every thought you might have dared to research online is vastly more than has ever before been possible in human history. It’s also vastly more than any other country has sought the legal power to access.

Photo by Luz on Flickr. Licensed under CC-BY.

Given what we know in a post-Snowden era, this proposed legislation is quite clearly not about ensuring a continued intelligence flow for the purposes of national security. That has been going on behind closed doors, away from any democratic process and meaningful oversight, for many years, and will no doubt continue. Whether or not the activities of military intelligence agencies have a strong legal foundation has apparently not stopped them from gathering what they need to do their job. It is important for me to note that I don’t doubt the hard work they do, and the success they have had over the last ten years in preventing violence in the UK. However, we know that overreach and abuse have occurred — at the kind of scale that undermines the very values our government and their agencies are there to protect.

It is clear to me that, given the secret and ‘shady’ nature of much of the activities of the security apparatus of perhaps every nation state, what we do not need to do as a democratic society is provide a strong legal protection for such morally ambiguous acts. If a tactic is invasive or aggressive, but genuinely necessary in a “lesser of two evils” sense, the fact that the actor has to take on the liability for it provides an inherent safeguard. If it is easy and low risk to employ that tactic, there is a stronger temptation for its abuse, or for its inappropriate extension into everyday investigations. When these laws are ‘sold’ to the people as being for national security and to keep us safe from violence, it cannot be acceptable that the powers are made available to other agencies for any other purposes, as the Bill proposes.

A nation state does not have the right to violate the sanctity of the boundary of someone’s home without strong justification — a warrant. A nation state similarly does not have the right to violate that boundary in the form of bulk data collection on an entire populace. The Internet connections we open and the data we transfer are something that we can keep private from our government, unless due process is followed to override that on an individual basis.

That must remain. That principle must be protected, or we’ve forgotten why we bother with this ‘free country’ thing.

It must be protected even when we face short- and medium-term risks to our safety. Why? Because it is not hyperbole to say that failing to do so lays the technical and legal foundations of a police state, which is a much more significant long-term risk.

Fortunately, there are many fighting against this Bill, which (even if you disagree with my arguments above) is widely regarded to be completely unfit for purpose.

I wholeheartedly support the Don’t Spy on Us campaign and its six principles, and I stand with them as a supporter of the Open Rights Group, one of the organisations behind the campaign.

Whiskers

One of the reasons I do love the camera on my iPhone. It truly is remarkable, the enormous power we carry in our pockets!

Happy Leap Day from this random cat!

Close-up of whiskers and nose of domestic cat

Photo by Peter Upfold, available under Creative Commons Attribution-Noncommercial-Share Alike 2.0 UK: England & Wales License, if you’d like to use it. My view in this case is that including the photo in a larger publication should not invoke the “Share Alike” clause, but modifications to the photo itself should.

DfontSplitter 0.4.2 for Mac — Critical Security Update

DfontSplitter icon

Today I release DfontSplitter 0.4.2 for Mac. This is a critical security update that fixes an issue relating to the Sparkle software update framework when the update pages are served over HTTP. As of 0.4.2, the update pages are now, naturally, served over HTTPS. (It has been more than five years since the last release was made!)

The vulnerability means that in a scenario where an attacker could launch a man-in-the-middle attack during a Sparkle-enabled app’s update detection process, arbitrary JavaScript could execute in the WebView hosting the release notes. Due to the context that the WebView runs in, the app could then be convinced to run local files, expose local files to a remote server and even execute arbitrary code. More details and a full breakdown are at the post on Vulnerable Security.

This update fixes the Sparkle-related security issue by updating Sparkle and requiring HTTPS for all future DfontSplitter app update communications. Due to new build requirements in Xcode 7.2, the application now requires at least OS X Snow Leopard (10.6) and a 64-bit Intel processor.

The automatic updates feature within DfontSplitter should detect the update, but you can also download and install it manually.

Thanks to Kevin Chen for pointing out the existence of the issue with Sparkle and that it affected DfontSplitter. I had somehow missed the original reporting of the vulnerability, so I particularly appreciate Kevin bringing it to my attention so promptly.

The astute among you may note that in the Info.plist for this update, I explicitly disable the OS X 10.11 SDK’s check for HTTPS forward secrecy in the HTTPS communications to the update server. Once I figure out a cipher suite configuration that I am happy with, and understand, in Pound (my reverse proxy acting as the TLS terminus), I will update the app again to require forward secrecy.
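
For reference, this is the shape of the App Transport Security exception involved; a sketch only, with updates.example.com standing in for the real update host:

<key>NSAppTransportSecurity</key>
<dict>
    <key>NSExceptionDomains</key>
    <dict>
        <key>updates.example.com</key>
        <dict>
            <!-- Keep requiring HTTPS, but do not insist on forward secrecy for now -->
            <key>NSExceptionRequiresForwardSecrecy</key>
            <false/>
        </dict>
    </dict>
</dict>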