
Staying Safe

I have written on this subject before, but as suspected, surveillance is back on Parliament’s agenda again.

Is the Investigatory Powers Bill the latest attempt at a “modernising” of existing laws and conventions, as is often claimed, or an unprecedented extension of surveillance powers?

I would argue strongly that the capability for your local council, tax enforcement authorities, and the myriad other agencies that are proposed to have access to this data to ‘see’ every thought you might have dared to research online is vastly greater than anything that has been possible before in human history. It’s also vastly more than any other country has sought the legal power to access.

Photo by Luz on Flickr. Licensed under CC-BY.

Given what we know in a post-Snowden era, this proposed legislation is quite clearly not about ensuring a continued intelligence flow for the purposes of national security. That has been going on behind closed doors, away from any democratic process and meaningful oversight, for many years, and will no doubt continue. Whether or not the activities of military intelligence agencies have a strong legal foundation has apparently not stopped them from gathering what they need to do their job. It is important for me to note that I don’t doubt the hard work they do, and the success they have had over the last ten years in preventing violence in the UK. However, we know that overreach and abuse have occurred — at the kind of scale that undermines the very values our government and their agencies are there to protect.

It is clear to me that, given the secret and ‘shady’ nature of much of the activities of the security apparatus of perhaps every nation state, what we do not need to do as a democratic society is provide a strong legal protection for such morally ambiguous acts. If a tactic is invasive or aggressive, but genuinely necessary in a “lesser of two evils” sense, the fact that the actor has to take on the liability for it provides an inherent safeguard. If it is easy and low risk to employ that tactic, there is a stronger temptation for its abuse, or for its inappropriate extension into everyday investigations. When these laws are ‘sold’ to the people as being for national security and to keep us safe from violence, it cannot be acceptable that the powers are made available to other agencies for any other purposes, as the Bill proposes.

A nation state does not have the right to violate the sanctity of the boundary of someone’s home without strong justification — a warrant. A nation state similarly does not have the right to violate that boundary in the form of bulk data collection on an entire populace. The Internet connections we open and the data we transfer are things we can keep private from our government, unless due process is followed to override that privacy on an individual basis.

That must remain. That principle must be protected, or we’ve forgotten why we bother with this ‘free country’ thing.

It must be protected even when we face short- and medium-term risks to our safety. Why? Because it is not hyperbole to say that failing to do so lays the technical and legal foundations of a police state, which is a much more significant long-term risk.

Fortunately, there are many fighting against this Bill, which (even if you disagree with my arguments above) is widely regarded as completely unfit for purpose.

I wholeheartedly support the Don’t Spy on Us campaign and its six principles, and I stand with them as a supporter of the Open Rights Group, one of the organisations behind the campaign.

Cautious Unattended Upgrades

I’m very excited to have put the ideas mentioned in my previous blog post about Cautious Unattended Upgrades into practice!

To quickly recap: the idea is that a software package on a Debian-based test system (‘the canary’) installs the latest security updates, runs an automated browser-based test suite to make sure these new updates have not broken any critical functionality on our clients’ sites, and then ‘pushes’ just those package updates to the production servers.

In keeping with my original plan, the software is written in Ruby and uses Watir/Selenium WebDriver to run a suite of tests that verify, just as a human being would in a live web browser, that client websites work correctly.
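The gating step at the heart of the canary can be sketched in a few lines. To be clear, the names below are illustrative rather than the project’s actual API; each check stands in for a Watir/Selenium-driven browser test:

```ruby
# Illustrative sketch of the canary's gating logic (not the project's
# real API). Each check is any callable returning true/false; in the
# real project these are Watir/Selenium browser tests against client
# sites.
class Canary
  def initialize(checks)
    @checks = checks
  end

  # Returns the packages considered safe to push to production, or an
  # empty list if any post-upgrade check failed.
  def approved_packages(upgraded)
    @checks.all?(&:call) ? upgraded : []
  end
end
```

The key property is that a single failing check approves nothing: no package reaches production unless every functional test passes after the upgrade.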

Cautious Unattended Upgrades — the canary in the coal mine. Image by stevep2008 on Flickr, licensed under CC-BY 2.0.

I was expecting the biggest challenge to be getting the browser automation side of things working, but that actually proved very easy, which is a testament to the design of those projects.

The software is still a little rough around the edges, as I explain in the README file on GitHub, but I’m very pleased with the project’s progress. We have put it into use on our live systems at Van Patten Media, so we can keep servers promptly up-to-date with security patches without manual intervention, while retaining greater peace of mind that our clients’ sites are still working as they should post-upgrade. (This is of course dependent on the quality and breadth of the tests that we write!)

I am particularly excited that this marks the first ‘real’ project in Ruby that I have written. Ruby isn’t a platform I have worked with too extensively before, so I have enjoyed challenging myself to be exposed to a different environment and quickly pick up how to achieve what I want to do. There is definitely more work to do — it really should be organised in a slightly more ‘Ruby-like’ way, and perhaps become a proper Ruby Gem, listed on rubygems.org, so those are things I will be looking at over the longer term for this project.

If you are interested in using Cautious Unattended Upgrades, or contributing to making it better, the project is licensed under a BSD-style licence and the code is available on its GitHub project page.

Automating Security Updates… Cautiously

Effraction by Sébastien Launay on Flickr. Licensed under CC-BY 2.0.

My attention has turned recently to how to automate the installation of security updates on various Linux distributions.

As Van Patten Media runs more servers, the effort and time needed to apply critical security updates promptly grows. Waiting several days to get security fixes just isn’t acceptable in a post-Shellshock era, yet there is always a risk of a completely automated update breaking important functionality.

One of the projects I will be investigating over the next few weeks is how we might build an automated test environment that could apply the updates quickly to a test VM, run a test suite to verify none of our critical client functionality breaks, then push those updates to the live servers.

There are various solutions for truly automatic updates; I focus on Debian’s unattended-upgrades package here. What seems to be more difficult, however, is being able to push that list of ‘approved’ packages and just install those when we are ready.
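For context, unattended-upgrades itself is driven by a couple of APT configuration files. A typical Debian setup looks something like this — exact origins and option values vary between releases, so treat it as a sketch rather than a drop-in config:

```
// /etc/apt/apt.conf.d/20auto-upgrades — turn on the daily run
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

// /etc/apt/apt.conf.d/50unattended-upgrades (excerpt) — only take
// packages from the security archive
Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}-security";
};
```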

My current train of thought on how to proceed on this is as follows:

  • Test box installs the day’s upgrades
  • Runs the test suite automatically
  • If the test suite passes, it determines which packages were most recently installed
  • Pushes that/those package names to the unattended-upgrades whitelist on the clients
  • Clients, on next unattended-upgrades run, will install those upgrades
  • Upon successful upgrade, we reset the whitelist

I am in the very early stages of looking at this, so that is a very rough sketch of where my thoughts are currently. There are missing pieces, but I was looking at Watir for the browser test suite component.
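As a sketch of the “determines which packages were most recently installed” step above: Debian records every dpkg action in /var/log/dpkg.log, so the day’s upgraded packages can be recovered from it. This is illustrative code, assuming the standard log line format:

```ruby
# Illustrative sketch: recover the names of packages upgraded on a
# given day from dpkg's log (/var/log/dpkg.log on Debian).
# Upgrade lines look like:
#   2014-10-06 06:25:01 upgrade bash:amd64 4.2+dfsg-0.1 4.2+dfsg-0.1+deb7u1
def upgraded_packages(log_text, date)
  log_text.each_line
          .select { |line| line.start_with?(date) && line.split(" ")[2] == "upgrade" }
          .map    { |line| line.split(" ")[3].split(":").first }
          .uniq
end
```

The resulting package list is what the test box would push into the whitelist on the clients.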

I would be interested to hear from anyone who has looked at this before, or if anyone knows of any interesting similar projects I haven’t found!

The Changing Face of Vulnerability News


The recent news about the Bash vulnerability dubbed “Shellshock”, and the degree to which it is getting mainstream press, has got me thinking about how software vulnerabilities are now being reported in the mainstream media.

Apparently, no vulnerability these days is complete without a catchy name and logo — see Heartbleed and Shellshock! Joking aside, though, the very fact that these vulnerabilities are making non-tech news headlines puts pressure on everyone running potentially vulnerable systems to do their duty — usually as straightforward as running a pre-packaged security update.

The Heartbleed and Shellshock stories are taking the place of what we used to see reserved for particularly influential computer worms, like Sasser and Mydoom. It’s most definitely positive that some vulnerabilities are getting attention — unfortunately, it is still the case that for some companies and system administrators, only outside pressure will convince them to promptly, diligently and consistently apply security updates.

What I’d like to see is some way for people interested in improving computer security (the “good guys”, for lack of a better term) to leverage this media interest to send a message to system administrators that it’s always necessary to apply software updates promptly, even when they don’t get on the TV news!

The Curse of The Black Box

The other key issue that Shellshock highlights, as did Heartbleed, is the issue of embedded ‘black box’ systems that might be vulnerable. This kind of system is everywhere — and because in many cases they are ‘set it and forget it’ machines, they represent a particular risk. It’s often very difficult to convince vendors of these systems of the importance of pushing upstream software updates down to end users, particularly when there is a lack of understanding and a lack of financial incentive.

Something big and mainstream, like Shellshock and Heartbleed, might convince system administrators to badger vendors to release patches for this kind of product, but we need to extend this further, and make it a social (or even a legal) expectation on vendors to supply security updates for any product they ship, for a reasonable lifetime period for that product.

The security landscape is too complex, and everything too interconnected, for anyone to have the opinion that “I don’t need to patch that, because there’s nothing important on it”.

Leaving Yourself in the Loop

I want to part with a few bullet points: some actions I try to take to stay up-to-date. Automatic updates are increasingly common, but not universal, and these simple habits can help you avoid missing a known vulnerability.

  • Document and understand the whole software footprint of the systems for which you are responsible. (This means embedded systems, software libraries, and more!)
  • Subscribe to announce mailing lists, follow Twitter accounts of the software projects and systems you use. (It pays to be in the know about available updates, and not hear about them after it is too late!)
  • Look for useful vulnerability resources for particular projects you use. (For example, for WordPress, the recently launched WPScan Vulnerability Database.)

Teaching Computer Security Basics

Over the past few years, I have ended up coming into contact with many computers belonging to individuals. My reason for doing so has varied, but usually I am helping them with something unrelated to security.

I found myself constantly saying the same things when I noticed bad security practices — “you really should update or remove Java”, “you need to actually stop clicking ‘Postpone’ and restart the computer some time”, “untick that box offering to install the toolbar” and so on.

Computer security is hard.

But, particularly when it comes to computers belonging to individuals, we have let the perfect become the enemy of the good. We have allowed anti-virus vendors to parrot messages about “total protection” instead of teaching sound principles and encouraging good practice.

Computer security, at least in this context, is in large part a human problem, not a technology problem.

So, a while ago, I had an idea to put together a really quick, 5-minute presentation that would encourage computer security principles that could dramatically lower the risk of individuals’ machines getting infected. I stripped it down to what I saw as the four most important principles (few enough that they might actually be remembered!):

  1. Keep software up-to-date — with emphasis on the importance of updates, despite the inconvenience, and mention the high-risk software titles du jour whose updates may not be entirely hands-off (Flash, Java, etc.).
  2. Keep up-to-date antivirus — with emphasis on such technology as the last line of defence, not ever a solution in and of itself.
  3. Install software from trusted sources — perhaps the most important principle that requires behaviour change, this is about getting people to feel confident enough to build a trust model for software and then make informed decisions about each and every installation they make.
  4. Be suspicious — in particular about communications that invite clicking on things and so on, including using alternative channels to verify legitimacy of things that look suspicious (e.g. never clicking unexplained links!)

I’ve not given this talk yet, but I’d like to. Computer security on home PCs is, in general, so poor that even a very basic set of ideas, memorable enough to actually be implemented, can probably make a significant difference to the health of our personal information infrastructure.

I would welcome feedback from others on these slides, as well as the idea.

I think it is quite important to keep it to five minutes and concise enough to be memorable and actionable, but I’m sure this idea can (and needs to) evolve and improve over time.

If you would like to use the slides, feel free to do so under the Creative Commons BY-NC-SA 2.0 licence. It would be great if many people could hear this message.

Valuing Corporate Values

Much is said about Google’s “don’t be evil” corporate motto. That is not what this post is about.

This is about corporate values — and a (rather smaller) company I have found myself appreciating because of their words and actions on the subject. This stuff can be easily overlooked when the market demands a rush to the lowest price, but to consumers like myself, it is possibly the most important thing.

This isn’t some murky sponsored post (although I do have an affiliate link at the bottom) — this is all genuine and from the heart.

Cloak


I found out about Cloak through their co-branding with 1Password, my password manager of choice. They are a VPN service designed to give you a way to encrypt your traffic when you are connected to untrusted networks. Their service is technically brilliant, but what is more important than that is the honesty, openness and realism they have shown so far in their communications.

At first I felt a little apprehensive about their corporate values and how well they were upheld in practice. Their privacy policy was scant on detail, making claims along the lines of “we don’t store any of your data”, but with an exception for data that they’d need “to make sure you’re not sending out spam”.

Well, what does that mean?


Cleaning up the IP.Board url4short mess

XDebug to the rescue…

The condensed, I-just-want-to-fix-my-site version:

On your server, try:

grep -ri '\$mds' /wherever/your/website/folder/is

to locate the injected code, and identify which file it resides in. You can then go into that file and remove it.
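If you would rather script the sweep, the same search can be expressed in a few lines of Ruby. This is purely illustrative; it matches the same tell-tale `$mds` marker as the grep above:

```ruby
# Illustrative Ruby equivalent of the grep above: walk the web root
# and list files containing the "$mds" variable used by this
# particular injection.
def infected_files(root)
  Dir.glob(File.join(root, "**", "*"))
     .select { |path| File.file?(path) && File.read(path).include?("$mds") }
end
```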

Also try re-caching all the skins and languages in the Admin Control Panel. Make sure all IP.Board updates and patches are applied to prevent the compromise happening again.

Reset your passwords and keys. Take measures to detect and continue detecting other infiltrations.

My friend Niall Brady dropped me an email, saying that some of the users of his Windows-Noob forums were reporting getting redirected to a spammy-looking site (url4short dot info) when clicking on search results to the site.

The forums run the Invision Power Board (IP.Board) software. There had been some reports of vBulletin boards being hit with this kind of spammy redirect, but fewer suggestions that this was an IPB problem. There had been a patch for a critical IPB issue released in December, but that had, obviously, been applied to the site as part of normal good practice.

Nevertheless, I was concerned. Clicking on a search engine result should definitely not redirect you anywhere other than the result page!

Without evidence that the issue was not limited to one machine, or one connection, however, it could not be ruled out that it was just malware on that visitor’s machine.


Protecting your browsing with Certificate Patrol for Firefox

I read this BBC News story about mistakenly issued security certificates recently, which allowed the people with those certificates to impersonate any Google websites and intercept traffic to them. It struck me as quite significant that this particular story made it to ‘mainstream’ tech reporting.

There is a more detailed, and perhaps more accurate, commentary on this attack on Freedom to Tinker. It perhaps may not have been ‘cyber criminals’ as the BBC reported it when I first viewed the story!

Anyway, given the attention to this issue, I thought it a good opportunity to review this kind of attack against SSL/TLS — the security system upon which we all now depend. More importantly, I wanted to show Certificate Patrol, an add-on for Firefox that allows you to notice a suspicious change to a certificate and thwart this kind of attack.

The weaknesses inherent in having too many organisations that are able to issue security certificates for any domain are becoming more clear. While this kind of attack is extremely rare, at the moment, ‘at the moment’ is a very poor security response! Hopefully, more awareness of these limitations of the internet’s authentication infrastructure can help put pressure on browser vendors, website owners and CAs to make everyone more secure.
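Under the hood, the kind of check Certificate Patrol performs is simple: remember a fingerprint of the certificate you saw last time, and flag any mismatch on a later visit. Here is a minimal sketch using Ruby’s standard OpenSSL library (the helper that fabricates a certificate exists only for demonstration, and real pinning must also cope with legitimate renewals):

```ruby
require "openssl"

# SHA-256 fingerprint of a certificate -- the value a pinning tool
# stores between visits.
def fingerprint(cert)
  OpenSSL::Digest.new("SHA256").hexdigest(cert.to_der)
end

# A fingerprint change since the last visit is the signal to
# investigate: either a routine renewal, or a mis-issued certificate
# being used to intercept traffic.
def suspicious_change?(pinned, current_cert)
  pinned != fingerprint(current_cert)
end

# Demonstration helper: fabricate a self-signed certificate, standing
# in for whatever certificate a server presents.
def demo_certificate(common_name)
  key = OpenSSL::PKey::RSA.new(2048)
  cert = OpenSSL::X509::Certificate.new
  cert.version = 2
  cert.serial = 1
  cert.subject = cert.issuer = OpenSSL::X509::Name.parse("/CN=#{common_name}")
  cert.public_key = key.public_key
  cert.not_before = Time.now
  cert.not_after = Time.now + 3600
  cert.sign(key, OpenSSL::Digest.new("SHA256"))
  cert
end
```

Two certificates for the same name, issued from different keys, produce different fingerprints, which is exactly the condition the add-on alerts on.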

Balancing the Risks of the Communications Data Bill

Like many of my generation, I have grown up expecting the rich benefits, and accepting the unique risks, an open and free internet presents.

I do not pretend that this medium for open exchange does not, at times, facilitate truly terrible things. I do not deny that the lives and security of millions of people are, at times, put at risk by this platform.

Behind the Black Boxes

I seek to urge balance — balance between the competing demands of liberty and security: weighing the risk of crime that is facilitated by free communications against the risk of installing tools and technology that could so easily be abused by those charged with protecting our safety.

The UK government, like many others at this point in time, is planning to introduce widespread surveillance of internet communications, in the form of the Communications Data Bill.

They will collect the ‘communications data’, the metadata of with whom we communicate, through traditional channels like email, but also through any number of third-party services. This will require them to employ deep packet inspection on all our internet traffic to extract the data that they would, under the Bill, be lawfully allowed to store.

But:

  • Who will make the ‘black boxes’ that will be doing all the collecting?
  • If the ‘black boxes’ are necessarily hunting through the whole communication for the communications metadata, how can we be sure that content will not also be collected?
  • Who will have access to these machines?
  • Are their access controls going to be subject to penetration testing by well-respected security researchers, so we can be confident that our data will remain under the control of the designated officials?
  • How do we know that the boxes have not been, and will not be, altered to do more than their original lawful task?

Our Patterns are Private

… unless there is good reason to suspect us of committing an offence.

One of the things that troubles me greatly is the proposal of storing the address of every website we visit. We are promised that the addresses of individual pages will not form part of this data, but nevertheless, an extraordinary amount of information can be inferred from a list of websites that an individual has visited.

Sensitive political information, medical details we have a right not to disclose, valuable commercial intelligence…

The more data that is accumulated, the more an abusive or corrupt agent can infer, and the more damage they could do. The information about where we go online also has a very high commercial value (as many internet companies are already well aware), making the likelihood of illicit commercial exploitation of this government-held data by rogue officials vastly higher.1

Unless there is reason to believe we have done something wrong, we have a right to withhold this information.

We should resist routine collection and storage of this information where there is no suspicion of wrongdoing.

The Balance of Power

Protecting citizens against risks to their safety is obviously a priority, and clearly a huge challenge. Many people devote their lives to doing so, and many people have made significant sacrifices in pursuit of security. I deeply respect these people, and the need for this work.

We must, however, limit the power we entrust to those who protect us. There will always be some who are liable to corruption, and some intent on harming us whilst purporting to do the opposite.

We have to balance the risk posed by ‘others’ — criminals, terrorists and rogue states — against the risk posed by those inside the system who may use it to exploit us.

Unfortunately, the abuse of power is made much more efficient where technology is involved.

The wide, sweeping powers of surveillance that the Bill mandates afford dangerous levels of power that are all too easily turned against us. We might trust this government and the software they put on the black boxes that watch all our traffic. What about the next one, and the software they load onto these machines? What if a group much less trustworthy are able to seize these powers in the future? What if the collection and storage technology itself is fundamentally insecure?

It is much easier to resist these overbroad powers now, than to try and re-balance rights and risks later.

Think

I ask that if you do nothing else, spend a little time thinking about these balances of power, and balances of risk.

If, like me, you read between the lines of the Bill and find these balances troublingly one-sided, then write to your MP and write about this issue. Make your voice heard.

Rightful liberty is unobstructed action according to our will within limits drawn around us by the equal rights of others. I do not add ‘within the limits of the law’ because law is often but the tyrant’s will, and always so when it violates the rights of the individual.

— Thomas Jefferson.

1: The UK’s recent huge press scandal has highlighted the issue of corrupt law enforcement officials giving privileged access and preferential treatment to private media companies. It is naïve to believe that this risk will not present itself again. The best way to protect against this kind of corruption and exploitation is to limit the collection of and access to our private data.

On Phone ‘Apps’ and Risk

I just came across an interesting post on the ESET Threat Blog (ESET being the antivirus vendor responsible for NOD32) about smartphone apps and the risk they potentially pose in a world where we install all sorts of applications, including those that deal with important and sensitive information, on the same device.

In particular, General Hayden remarks that ‘In the popular culture, the availability of 10,000 applications for my smart phone is viewed as an unalloyed good. It is not — since each represents a potential vulnerability. But if we want to shift the popular culture, we need a broader flow of information to corporations and individuals to educate them on the threat. To do that we need to recalibrate what is truly secret.’

Yes, each app that you install on your smartphone is a potential vulnerability. It is precisely for that reason that you should be making decisions about what you install based upon rational thought processes. There are some things where the reward is not great enough to warrant the amount of risk taken. For example, you might choose not to drive 120 MPH (193 KPH) because the cost of potentially getting caught isn’t worth the benefit of arriving sooner, or perhaps even the benefit of the fun of driving so fast. If you do choose to drive that fast where it is not permitted, and you do get caught, you may discover that the consequences are so extreme you wish you hadn’t taken the chance.

When it comes to installing software on your smartphone, take a good look at what you may be risking. Do you do online banking or shopping with your smartphone? Do you have business contacts? Contact details for friends? How about access to an email account with private emails? All of this information may be compromised if the wrong app is installed. After you identify what assets you have and their value, consider the app you are installing. What is the benefit it offers you? Is it worth potentially risking your information for a funny picture, or a game you might play a couple of times a year and can probably play online rather than installing it on your smartphone?

It’s an interesting read — and should remind everyone using an app-capable mobile device that it is a powerful computer, and with that comes a certain degree of risk. While the major smartphone software platforms have a higher level of technical separation between apps running on the same device than you typically get with a desktop PC, we should still be thinking about what apps are sharing ‘the floor’ with others, especially those which deal with more sensitive information, like mobile banking.