Blog

The Investigatory Powers Act

I sincerely hope the UK Government plans to actually debate the “Repeal the new Surveillance laws (Investigatory Powers Act)” petition in Parliament now that it has reached 100,000 signatories, including myself.

Of course, the commitment they made is carefully worded such that attracting that number of signatures merely means it will be “considered” for debate.

Recent events in the United States and elsewhere demonstrate that maintaining the right balance of power between the state and the individual is more important than ever. I would not normally get political here, but the circumstances are anything but normal — the frightening lurch the western world seems to be taking towards extreme right-wing authoritarianism makes maintaining that balance nothing short of critical.

The list of organisations that can access internet connection records is enormously wide and includes bodies as mundane as the Food Standards Agency! This goes far beyond anything that could be argued as essential to maintaining the UK’s operational intelligence capabilities for preventing domestic acts of mass violence.

This law would be deeply, deeply troubling at any time, but is even more so as the US election shows us the threat of home-grown extremism that rises through established political bodies and gains the powers of high office.

Personally, I urge everyone to support efforts to mount legal challenges to this legislation.

Please consider supporting organisations like Open Rights Group.

QuickArchiver on Thunderbird — Archiving Messages to the Right Folder with One Click

QuickArchiver icon

Despite the dominance of webmail, I have long used a traditional desktop email client. I like having a local mail archive should “the cloud” have trouble, as well as the ability to exert control over the user interface and user experience. (That might be partly a euphemism for not having to see ads!)

Apple’s Mail.app, built into macOS (going to have to get used to not calling it OS X!), has served me pretty well for quite some time now, alongside Thunderbird when I’m on Linux. While Mail.app offered the smoothest interface for the platform, it didn’t always have all the features I wanted.

For example, the way mail rules can be run in Mail.app is more limited than I wanted. I could have rules run automatically as messages arrived in my inbox, or disable them entirely. How I actually wanted to use rules, though, was to cast my eye over my inbox and then bulk archive (to a specific folder) all emails of a certain type, once I’d decided none of them needed my fuller attention.

Recently, I moved to Thunderbird on my Mac for managing email and discovered QuickArchiver.

As well as letting you write rules yourself, QuickArchiver offers the clever feature of learning which emails go where, and then suggesting the right folder to which that message can be archived with a single click.

It’s still early days, but I am enjoying this. Without spending time writing rules, I’m managing email as before, and QuickArchiver is learning in the background what rules should be offered. The extra column I’ve added to my Inbox is now starting to populate with that one-click link to archive the message to the correct folder!

It’s just a nice little add-on if, like me, you (still??) like to operate in this way with your email.

The Windows 10 Experience

New Windows logo

I haven’t said much about Windows 10 here on this blog, but my day job brings me into contact with it quite extensively.

There is a huge amount about the Windows experience that this release improves, but there are also elements of Microsoft’s new approach to developing and releasing it that are problematic.

The Good

Installing Windows 10 across a variety of devices, it is striking just how much less effort is required to source and install drivers. In fact, in most cases no effort is required at all! Aside from the occasional minor frustration of bloated drivers that are desperate to add startup applications, this makes such a positive difference. Unlike in the past, you can typically just install Windows, connect to a network, and everything will work.

This is particularly notable in any environment where you have a large number of devices with anything more than a little bit of hardware diversity. Previously in an enterprise environment, hunting for drivers, extracting the actual driver files, removing unwanted ‘helper application’ bits and building clean driver packages for deployment was tedious and wasteful of time. Now, much of the time, you let Windows Update take care of the drivers for you over the network, all running in parallel to the actual provisioning process that you have configured!

There are numerous other pockets of the operating system where it really feels like there has been a commitment to improve the user experience, but from my “world of work” experience of the OS, this is the most significant. It’s true as well that many of the criticisms you could make about past versions of Windows no longer apply.

The Bad

I guess that the coalescing of monthly Windows Updates into a single cumulative update helps significantly with the ‘236 updates’ problem with (and atrocious performance of) Windows Update in Windows 7. However, Microsoft’s recent history of updates causing issues (the recent issues with KB3163622 and Group Policy, for example) combined with the inability to apply updates piecemeal leaves some IT departments reluctant to apply the monthly patch. The result, if Microsoft continues to experience these kinds of issues, or doesn’t communicate clearly about backwards-incompatible changes, is more insecure systems, which hurts everybody.

This leads me to my other main complaint. There have been reports about the new approach Microsoft is taking with software testing. An army of ‘Insiders’ seem to be providing the bulk of the telemetry and feedback now, but my concern is that this testing feedback doesn’t necessarily end up being representative of all of the very diverse groups of Windows users. Particularly when deploying Windows 10 in an Enterprise environment, it has felt at times like we are the beta testers! When one update is a problem, you then have to put people at risk by rejecting them all. 🙁

(Yes, there is LTSB, but it hangs back a very long way on features!)

The Ugly

Windows 10 'Hero' image

At least you can turn it off on the login screen officially now. 🙂

Reverse Proxying ADFS with Nginx

In my recent trials and tribulations with ADFS 3.0, I came up against an issue where we were unable to host ADFS 3.0 with Nginx as one of the layers of reverse proxy (the layer closest to ADFS).

When a direct connection, or a cURL request, was made to the ADFS 3.0 endpoints from the machine running Nginx, all seemed well, but as soon as you actually tried to ferry requests through a proxy_pass statement, users were greeted with HTTP 502 or 503 errors.

The machine running ADFS was offering up no other web services — there was no IIS instance running, or anything like that. It had been configured correctly with a valid TLS certificate for the domain that was trusted by the certificate store on the Nginx machine.

It turns out that, despite ADFS being the only HTTPS service offered on that machine through HTTP.sys, you need to explicitly configure which certificate it should present by default. Apparently, requests that come via Nginx’s proxy_pass are missing something (the SNI negotiation?) that allows HTTP.sys to choose the correct certificate to present.

So, if and only if you are sure that ADFS is the only HTTPS service you are serving up on the inner machine, you can force the correct certificate to be presented by default, which resolves this issue and allows the Nginx reverse proxied requests to get through.

With that warning given, let’s jump in to what we need to do:

Retrieve the correct certificate hash and Application ID

netsh http show sslcert

You’ll need to note the appid and the certificate hash for your ADFS 3.0 service.

Set the certificate as the default for HTTP.sys

We’ll use netsh’s interactive mode, as I wasn’t in the mood to figure out how to escape curly brackets on Windows’ command line!

You want the curly brackets literally around the appid, but not the certhash.

netsh
netsh> http
netsh http> add sslcert ipport=0.0.0.0:443 appid={appid-from-earlier} certhash=certhash-from-earlier
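
Once the binding is in place, you can quit netsh and sanity-check that it took effect. The ipport filter below simply narrows the listing to the default binding just added:

netsh http show sslcert ipport=0.0.0.0:443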

Verify the proxy_pass settings

Among other configuration parameters, we have the following in our Nginx server stanza for this service:

proxy_redirect off;
proxy_http_version 1.1;
proxy_request_buffering off;
proxy_set_header X-MS-Proxy the-nginx-machine;
proxy_set_header Host the-hostname-in-question;
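
For context, those directives sit inside the location block alongside the proxy_pass directive itself. A minimal sketch might look something like the following (the upstream hostname is just a placeholder for the ADFS machine):

location / {
    proxy_pass https://the-adfs-machine;
    # plus the proxy_redirect, proxy_http_version, proxy_request_buffering
    # and proxy_set_header directives shown above
}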

And, with that, we were successfully reverse proxying ADFS 3.0 with Nginx. 🙂

Forms-based ADFS 3.0 Endpoints Inexplicably Showing HTTP 503

Azure Active Directory logo

As with many other organisations, at my day job we are using the Office 365 service for email, contacts and calendars. There are a few ways to integrate 365 with your local Active Directory, and we have been using Active Directory Federation Services (ADFS) 3.0 for handling authentication: users don’t authenticate on an Office-branded page, but get redirected after entering their email addresses to enter their passwords on a page hosted at our organisation.

We also use the Azure AD Connect tool (formerly called Azure AD Sync, and called something else even before that) to sync the directory with the cloud, but this is only for syncing the directory information — we’re not functionally using password sync, which would allow people to authenticate at Microsoft’s end.

We recently experienced an issue where, suddenly, the endpoints for ADFS 3.0 that handle forms-based sign in (so, using a username and password, rather than Integrated Windows Authentication) were returning a HTTP 503 error. The day before, we had upgraded Azure AD Sync to the new Azure AD Connect, but our understanding was that this shouldn’t have a direct effect on ADFS.

On closer examination of the 503 issue, we would see errors such as this occurring in the AD FS logs:

There are no registered protocol handlers on path /adfs/ls/ to process the incoming request.

The ADFS web service endpoints are exposed through the HTTP.sys kernel-mode web serving component (yeah, it does sound rather crazy, doesn’t it?) built into Windows.

One of the benefits of this rather odd approach is that multiple different HTTP serving applications (IIS, Web Application Proxy, etc.) can bind to the same port and address, but be accessed via a URL prefix. HTTP.sys refers to these reservations as ‘URL ACLs’.
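
If you are curious which prefixes are reserved on a given machine, HTTP.sys will happily list them with the standard listing command; you should see the /adfs/ entries among others:

netsh http show urlacl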

To cut a very long story short, it emerged eventually that the URL ACLs that bind certain ADFS endpoints to HTTP.sys had become corrupted (perhaps in the process of uninstalling an even older version of Directory Sync). I’m not even sure they were corrupted in the purely technical sense of the word, but they certainly weren’t working right, as the error message above suggests!

Removing and re-adding the URL ACLs in HTTP.sys, granting permissions explicitly to the user account which is running the ‘Active Directory Federation Services’ Windows service, allowed the endpoints to function again. Users would see our pretty login page again!

netsh http delete urlacl url=https://+:443/adfs/
netsh http add urlacl url=https://+:443/adfs/ user=DOMAINACCOUNT\thatisrunningadfs

We repeated this process for other endpoints that were not succeeding and restarted the Active Directory Federation Services service.
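
As a rough sketch of that final step, something along these lines does the job (findstr is only there to filter the listing down to the ADFS reservations, and adfssrv is the short name of the Active Directory Federation Services service):

netsh http show urlacl | findstr /i adfs
net stop adfssrv
net start adfssrv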

Hurrah! Users can log in to their email again without having to be on site!

This was quite an interesting problem that had me delving rather deeply into how Windows serves HTTP content!

One of the primary frustrations when addressing this issue was that a lot of the documentation and Q&A online is for the older release of ADFS, rather than for ADFS 3.0! I hope, therefore, that this post might help save some of that frustration for others who run into this problem.

Isn’t it funny that so frequently it comes back to “turn it off, and turn it back on again”? 🙂

Staying Safe

I have written on this subject before, but as suspected, surveillance is back on Parliament’s agenda again.

Is the Investigatory Powers Bill the latest attempt at a “modernising” of existing laws and conventions, as is often claimed, or an unprecedented extension of surveillance powers?

I would argue strongly that the capability for your local council, tax enforcement authorities, and the myriad other agencies proposed to have access to this data to ‘see’ every thought you might have dared to research online is vastly greater than anything that has been possible in human history. It is also vastly more than any other country has sought the legal power to access.

Photo by Luz on Flickr. Licensed under CC-BY.

Given what we know in a post-Snowden era, this proposed legislation is quite clearly not about ensuring a continued intelligence flow for the purposes of national security. That has been going on behind closed doors, away from any democratic process and meaningful oversight, for many years, and will no doubt continue. Whether or not the activities of military intelligence agencies have a strong legal foundation has apparently not stopped them from gathering what they need to do their job. It is important for me to note that I don’t doubt the hard work they do, and the success they have had over the last ten years in preventing violence in the UK. However, we know that overreach and abuse have occurred — at the kind of scale that undermines the very values our government and their agencies are there to protect.

It is clear to me that, given the secret and ‘shady’ nature of much of the activities of the security apparatus of perhaps every nation state, what we do not need to do as a democratic society is provide a strong legal protection for such morally ambiguous acts. If a tactic is invasive or aggressive, but genuinely necessary in a “lesser of two evils” sense, the fact that the actor has to take on the liability for it provides an inherent safeguard. If it is easy and low risk to employ that tactic, there is a stronger temptation for its abuse, or for its inappropriate extension into everyday investigations. When these laws are ‘sold’ to the people as being for national security and to keep us safe from violence, it cannot be acceptable that the powers are made available to other agencies for any other purposes, as the Bill proposes.

A nation state does not have the right to violate the sanctity of the boundary of someone’s home without strong justification — a warrant. A nation state similarly does not have the right to violate that boundary in the form of bulk data collection on an entire populace. The Internet connections we open and the data we transfer are things we must be able to keep private from our government, unless due process is followed to override that on an individual basis.

That must remain. That principle must be protected, or we’ve forgotten why we bother with this ‘free country’ thing.

It must be protected even when we face short- and medium-term risks to our safety. Why? Because it is not hyperbole to say that failing to do so lays the technical and legal foundations of a police state, which is a much more significant long-term risk.

Fortunately, there are many fighting against this Bill, which (even if you disagree with my arguments above) is widely regarded as completely unfit for purpose.

I wholeheartedly support the Don’t Spy on Us campaign and its six principles, and I stand with them as a supporter of the Open Rights Group, one of the organisations behind the campaign.

Apple Event Brain Dump

Some very raw and unfiltered thoughts on today’s Apple announcement:

I thought after all this time there’d be more content deals for the new Apple TV — the “apps” focus suggests that they are having to concede their former approach entirely and acknowledge that they won’t funnel much TV content through iTMS at all.

Harry Potter photos!

4K video capture on a phone is pretty amazing.

The MLB Apple TV app demo with watching two games at once must have been a Back to the Future II reference for 2015 — “give me channels 5, 9…”. Right? Right?

I’m interested to see (hopefully non-fanboy/girlish) thoughts on how the iPad Pro will compare with the Surface range. It’s actually interesting to me that if MS get the touch-style apps done well (they have to do a better job than with Win8!), the Surfaces’ flexibility to also run classic Windows apps might make them more competitive in that “pro tablet” area.

I want to go play with 3D Touch when I can! If they’ve done it well, it could be quite cool.

I’m no artist in drawing terms, but the Pencil looked pretty amazing. I was deriding it at first as a silly stylus. I was wrong.

So. Much. Stuff!

Adventures in SELinux

Security Enhanced Linux (SELinux) is a pretty powerful technology that adds another layer of access control to a Linux system. It significantly limits the ability of an attacker who has partly compromised a system to use that access to jump deeper into the system.

It has been standard in Red Hat Enterprise Linux and its derivatives for quite some time, and is often the cause of many a headache when something doesn’t work because it is being (apparently) silently blocked by SELinux’s security enhancements!

Its potential to cause breakage, especially when third-party bits and pieces are brought into the mix, means the advice from well-meaning individuals is often a cry of “just turn off SELinux!”, leaving the system without that extra layer of protection.

I will not pretend that my recent dealings with SELinux in CentOS 7 have been free of frustration, but a few simple tools have made it a surprisingly simple affair to get something up and running again if a particular behaviour (always of something a bit third-party in my experience!) is being erroneously blocked.

I think a big part of what makes SELinux get switched off in frustration is the perception that it is breaking things silently, and the psychological impact of its verbose ‘scariness’ when you do find those logs!

audit2why

As long as you remember that SELinux can be the cause of potential unexplained weirdness, your first port of call can be audit2why:

audit2why -a

What is particularly nice about this tool is how quickly you get (semi-)human readable output, detailing which rules an application is breaking. If you do hit one of these ‘weird problems’, a quick trip to this tool usually makes it clear that SELinux is the cause of the failure!
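
If the audit log is long, it can also help to feed the tool only the recent denials. Assuming auditd is running (it is by default on CentOS 7), something like this works:

ausearch -m avc -ts recent | audit2why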

audit2allow

I was surprised by how relatively quick it was to identify an issue with audit2why and make a custom module with audit2allow to get an application working again.

There is a good set of instructions in the Red Hat manual.

It sounds like a big deal, but the tools have made it almost completely automated — it really isn’t necessary to have a deep understanding of SELinux’s internal workings.
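
As a rough sketch of that workflow for, say, a blocked web server process, something along these lines generates and loads a custom policy module (the grep filter and the module name are just examples):

grep httpd /var/log/audit/audit.log | audit2allow -M my_custom_module
semodule -i my_custom_module.pp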

setsebool

Finally, there are some flags that are disabled by default for SELinux protected applications that you might need. Again, audit2why will often make it clear what you need to toggle using this tool!

For example, a web server process that does legitimately send out emails might need the appropriate flag switched on. Without it, the web server doesn’t get the right to talk to sendmail.
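
On CentOS 7, the boolean for that particular example is usually httpd_can_sendmail. Something like the following finds and flips it; the -P flag makes the change persist across reboots:

getsebool -a | grep sendmail
setsebool -P httpd_can_sendmail on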

Give SELinux Another Chance!

I suppose my point in this rambling is this: give SELinux a chance if you have given up on it in the past. If you have the time to set up your system properly (and you should!), taking a little extra time to figure out how to grant only the permissions really needed does make a material difference to your security should one application be compromised. You have to assume that an attacker could get a foot in the door, so making their life a lot harder at that point is worth making your own life slightly harder on the odd occasion.

With a little patience and the use of the tools I have talked about, I think it is a lot easier to work with than it might seem at first glance, or when it first arrived in RHEL many moons ago!

My Alternative Christmas Wish

Conflict minerals 5 by Sasha Lezhnev of the Enough Project (http://www.enoughproject.org/). Licensed under CC-BY-ND 2.0.

I am always interested to know where the products I buy come from, and this particularly consumer-focused time of year highlights the issue further. It is striking to me just how complex the chains of dependencies involved in making any non-trivial ‘thing’ really are.

The computer I am writing this on was assembled in the UK, but I would suggest that most, or all, of its components were not. What about the suppliers who provided sub-components for those components? What about the raw materials, including the traces of rare earth minerals needed to do its electronic magic?

Unfortunately, the result of this enormous complexity, and the fact that the retailers from which we buy care about little but the price they pay, means it is very difficult to verify really important aspects of how the goods we consume were made. It seems that nobody, not even the retailers at this end of the chain, has the depth of insight into their supply chain needed to affirmatively say “this is how our product was made”.

So, when cost is the primary concern, and nobody really digs deep to understand what is happening at each stage of a product’s life, how can consumers at this side of the transaction be empowered to make more ethical choices?

If I go to my local big-name supermarket to buy a kettle, for example, I cannot look at all the options and make an assessment about which product was made in a way that best aligns with my values. Did the manufacture of the cheapest £5 kettle involve the exploitation of somebody? Probably. Is the £30 ‘luxury’ choice any better? We just don’t know.

There are pockets of hope in this area — initiatives like Fairtrade have, for some product categories, encouraged supermarkets to go ‘all Fairtrade’ for particular items, and encouraged other companies (tea and coffee businesses being a good example) to take steps to at least appear to be sourcing more ethically.

I just wish there were a big push from somewhere to gather accurate intelligence about how our stuff is made, and begin labelling more products in a way that empowers consumers to make better choices. I think there is a good portion of the population who want to make better choices that support human rights, environmental protection and social progress, but without high quality, verifiable information about what goes into what we consume, we are in the dark. While we remain uninformed, we cannot exert pressure on the market to do better as a whole.

Serving Pi

I recently completed a physical migration of my server, the one that hosts this very page! It all went successfully, and without any noticeable downtime for this site, which I am very pleased about.

There was, however, a period of time during which this server needed to be physically switched off and moved to the new location. To achieve zero downtime, something else would need to serve the site during that period.

Enter my Raspberry Pi!

Raspberry Pi in box

This amazing little thing is capable of running Raspbian, a modified version of Debian, which means I get access to the rich library of Debian packages that are available. I have a private Git repository containing a modular set of Puppet manifests. These describe the exact configuration of this server, so by applying the Puppet manifests, I can spin up a new instance of this particular server’s configuration on a whim.

So, I dusted off an SD card that was lying around, dropped Raspbian onto it, installed Puppet and Git, and applied the manifests.
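
In rough terms, the bootstrapping looks something like this (the repository URL and manifest path are placeholders, and the exact Puppet layout will differ):

sudo apt-get update
sudo apt-get install -y puppet git
git clone https://git.example.com/my-server-config.git
sudo puppet apply --modulepath=my-server-config/modules my-server-config/manifests/site.pp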

If I’m honest, there were a few components that weren’t quite so happy to run, despite packages being available. Varnish didn’t seem to like my VCL file, so I had to run the site here directly with Nginx pointing to PHP-FPM instead.
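
The stand-in was essentially the stock Nginx-to-PHP-FPM arrangement. A minimal sketch of the relevant location block, assuming Raspbian’s php5-fpm package and its default socket path, would be:

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}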

To cut a long story short, it worked! I was successfully serving up this site, from the Pi, using (almost) my existing configuration. Performance was not stellar, even compared to the modest hardware that normally serves this page, with page load times about 10 times slower than uncached page loads normally would be. The main blog page took a full 1.5 seconds to render! For the short time I needed it, though, I was very happy to have a very inexpensive and easy solution.