‘at risk’, by booleansplit / Robert S. Donovan on Flickr
Scott Charney of Microsoft’s ‘Trustworthy Computing’ effort recently wrote a blog post discussing the threats posed by botnets and other malware installed on users’ machines, where the user is unaware of, or apathetic about, the presence of that software.
Just as when an individual who is not vaccinated puts others’ health at risk, computers that are not protected or have been compromised with a bot put others at risk and pose a greater threat to society. In the physical world, international, national, and local health organizations identify, track and control the spread of disease which can include, where necessary, quarantining people to avoid the infection of others. Simply put, we need to improve and maintain the health of consumer devices connected to the Internet in order to avoid greater societal risk. To realize this vision, there are steps that can be taken by governments, the IT industry, Internet access providers, users and others to evaluate the health of consumer devices before granting them unfettered access to the Internet or other critical resources.
I have argued previously against the “there’s nothing important on my computer, so I don’t care” response that some have to the discovery of malware on their machines, and I certainly believe that it is an irresponsible attitude that contributes to these greater threats.
But I am concerned about some of the solutions Scott proposes, particularly those that might seek to create legislation and impose obligations on individual computer users.
Access to the internet and digital technology has become more and more important in recent years. Many public services here in the UK are only fully accessible online, for example, and countries such as Finland and Estonia have even declared internet access to be a ‘human right’; we must approach this issue with that in mind. Cutting off people’s access to those online services, whether on the basis of merely alleged copyright infringement or because their machine is deemed to be infected, has serious implications for an individual’s freedoms and, as I mentioned, potentially even their human rights.
We must examine carefully to what extent people have a right to run their computers as they like. For example, I make a conscious and deliberate decision not to run anti-virus software on some of my computers. I wouldn’t advise anyone with a non-technical background to do the same, but I do it because there are real benefits to not being burdened by protection software. I am, of course, extraordinarily careful about what software I install on those computers and what configuration I leave them in. I am confident that those systems are ‘clean’ because I know exactly what I did to them, and I regularly inspect them to ensure that they are still doing what I asked of them, and nothing more.
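For the curious, the kind of manual inspection I mean might look something like the rough sketch below: take a baseline of the files you care about, and periodically check that nothing has appeared, disappeared or changed behind your back. The directories and baseline filename here are purely illustrative, and this is in no way a substitute for proper protection on a machine you don’t watch this closely.

```python
#!/usr/bin/env python3
"""Rough sketch of a manual 'is my machine still doing only what I asked?' check:
hash a set of directories and compare against a saved baseline.
The watched paths and baseline filename are illustrative only."""

import hashlib
import json
import os
import sys

# Hypothetical choices for the sake of the example
WATCHED_DIRS = ["/usr/local/bin", os.path.expanduser("~/.config/autostart")]
BASELINE_FILE = os.path.expanduser("~/.file-baseline.json")


def snapshot(dirs):
    """Return {path: sha256 hex digest} for every readable file under the watched dirs."""
    hashes = {}
    for root_dir in dirs:
        for dirpath, _dirnames, filenames in os.walk(root_dir):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as fh:
                        hashes[path] = hashlib.sha256(fh.read()).hexdigest()
                except OSError:
                    pass  # unreadable files are simply skipped
    return hashes


def main():
    current = snapshot(WATCHED_DIRS)

    # First run (or explicit request): record the baseline and stop.
    if not os.path.exists(BASELINE_FILE) or "--rebaseline" in sys.argv:
        with open(BASELINE_FILE, "w") as fh:
            json.dump(current, fh, indent=2)
        print(f"Baseline of {len(current)} files written to {BASELINE_FILE}")
        return

    with open(BASELINE_FILE) as fh:
        baseline = json.load(fh)

    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in current if p in baseline and current[p] != baseline[p])

    for label, paths in (("added", added), ("removed", removed), ("changed", changed)):
        for path in paths:
            print(f"{label}: {path}")

    if not (added or removed or changed):
        print("No differences from baseline.")


if __name__ == "__main__":
    main()
```

The point isn’t that everyone should do this; it’s that, on these machines, I have consciously taken that responsibility on myself rather than delegating it to a product.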
Those computers are mine. I choose what software to run on them, and what software not to run on them. To what extent would mandatory ‘health check’ software have a right to be on my machine if I said no? What conditions and obligations would be placed on me to prove to the relevant authority that my computer was ‘clean’ before I could access resources?
What impact might this idea have on free and open source software and/or alternative software platforms, where the legal and technical parameters may be very different to those of traditional consumer computing devices?
Perhaps more importantly, what would happen in another scenario, where a non-technical user is told their machine is not ‘clean’ enough to access the network, but the ‘health check’ software is unable to remove the threat? Would that user be blocked from accessing the network until they hired someone to fix their computer? What if that user could not afford that, or was otherwise unable to remove the threat? What if they were therefore denied access to important information and services that are only delivered online?
Admittedly, this sort of mandatory control and enforcement of computer health isn’t something Scott, or Microsoft, is proposing happen right away. My scenarios involve extrapolation and some speculation on my part.
Voluntary behavior and market forces are the preferred means to drive action but if those means fail, then governments should ensure these concepts are advanced.
[…]
Cyber security policy and corresponding legislation is being actively discussed in many nations around the world and there is a huge opportunity to promote this Internet health model. As part of this discussion, it is important to focus on building a socially acceptable model. While the security benefits may be clear, it is important to achieve those benefits in a way that does not erode privacy or otherwise raise concern.
Nevertheless, this proposal brings to the forefront issues such as the freedom of all users of technology to use their devices as they see fit, without interruption or intrusion, and the impact that access restrictions imposed in the name of computer health might have on those already digitally disadvantaged in society.
The threat presented by the security problems of modern computer systems is absolutely a serious issue, with much wider implications than many may immediately think. It isn’t just our individual computers any more: it affects the global economy and the vital infrastructure of the modern world (power grids, emergency services, payment systems…). Any proposed technical and legislative solutions need to be examined for their impact on the rights and privacy of individuals and businesses, and any structures of social responsibility we build around computer security need to be supported by the majority of the population.
That’s why I think education has to be the first place to start. I’ve said this before and I will say it again. We must start teaching people about computer and technology security differently. It’s not just “install an anti-virus program” here and “look for the padlock” there. People need to be made more aware of issues of trust around what software they install and which services they choose to use. People, as Scott and Microsoft Trustworthy Computing suggest, need to be made aware of their responsibility to maintain their computer (just as a car must be road-worthy, a computer needs to be net-worthy). And people need to be made aware that just as the consequence of an unroadworthy car could be that someone innocent gets injured or killed on the road, the consequence of poor security on their own computers and devices is a threat not only to them, but to others: online businesses threatened by a DDoS attack, or perhaps even the very infrastructure on which their country, society and way of life depends.
If we start with education, changing the message about computer security that the geeks send to the non-geeks, we will take the first steps towards making ordinary people more aware of these issues, and perhaps more willing to help us address them.
Photo is ‘at risk’, by booleansplit / Robert S. Donovan, taken from Flickr. The photo is licensed under CC BY 2.0.