Saturday, February 27, 2016

Consultation response to "Keeping Children Safe in Education: proposed changes"

Profs. Ian Brown and Douwe Korff, February 2016

1.              We have only just learned of the consultation, for which we apologise. The comments below were as a result written quickly, to reflect our main concerns.

2.              We were both co-authors (with others) of the 2006 report “Children’s Databases – Safety and Privacy”, written by the Foundation for Information Policy Research (FIPR) for the Information Commissioner and available here:

We believe the report – although written in a different time and context – contains many observations that are relevant to the consultation, and we incorporate it here by reference.

3.              We will focus on the proposed duty of schools and other educational establishments for under-18s to monitor online activities by students, as set out in paragraph 75 of the Draft Statutory Guidelines, as follows:

As schools and colleges increasingly work online it is essential that children are safeguarded from potentially harmful and inappropriate online material. As such governing bodies and proprietors should ensure appropriate filters and appropriate monitoring systems are in place. Children should not be able to access harmful or inappropriate material from the school or colleges IT system. Governing bodies and proprietors should be confident that systems are in place that will identify children accessing or trying to access harmful and inappropriate content online. Guidance on e-security is available from the National Education Network- NEN. Guidance on procuring appropriate ICT is available at: Buying ICT advice for schools.

4.              We fear in particular that the above-mentioned duty of schools to have “systems ... in place that will identify children accessing or trying to access harmful and inappropriate content online” will be read by many school “governing bodies and proprietors” as requiring them to monitor the online activities of their students continuously and in detail. More specifically, we are concerned that schools will try to obtain filtering and monitoring software that will not only prevent children and young people from accessing important information, e.g., on sexual health and gender issues, or religious or political matters – including contemporary contentious issues such as terrorism and jihadism – but that will also automatically detect and single out individual students deemed by the software – that is, by an algorithm – to be in some sense “deviant”.

Main issues
5.              The three most important general issues identified in the 2006 FIPR report were:
I.               Children have human rights – including a right to privacy and to seek, receive and impart information without undue interference;
II.             There are serious dangers in conflating “safeguarding” children with “promoting the welfare” of children that can lead to breaches of their rights; and
III.           There are serious dangers inherent in trying to predict and prevent “bad” outcomes for children, especially if this is done on the basis of profiling, data mining and what is now called “algorithmic decision-making”, that can lead to further breaches of their rights, without effective remedies.

Brief elaborations on the main issues

I.              Children have human rights – including a right to privacy and to seek, receive and impart information without undue interferences;

6.              The UN Convention on the Rights of the Child (CRC) was adopted as long ago as 1989 and has been in force since 1990; the UK signed up to it in the same year and ratified the convention in December 1991. Although “the child, by reason of his physical and mental immaturity, needs special safeguards and care”, this should not lead to undue interference with the child’s privacy or other rights and freedoms: see in particular Articles 13–17 CRC. Measures that intrude on a child’s rights and freedoms must serve a legitimate aim and must be necessary and proportionate to the achievement of that aim.

7.              The assessments of “legitimate aim”, “necessity” and “proportionality” will vary depending on the nature of the aim, the intrusiveness of the interference – and, in relation to children (defined in both the CRC and the Draft Statutory Guidelines as anyone under the age of 18), the level of maturity of the child. Interference that may be legitimate and proportionate if applied to a 5- or 10-year-old may be unjustified and disproportionate if applied to a 16- or 17-year-old.

8.              In this, it should be borne in mind that ubiquitous monitoring of a person’s online activities constitutes a very serious interference with that person’s private life and with his or her data protection rights. As the Court of Justice of the EU (CJEU) has put it in an important recent judgment:

“legislation permitting the public authorities to have access on a generalised basis to the content of electronic communications must be regarded as compromising the essence of the fundamental right to respect for private life, as guaranteed by Article 7 of the [EU Charter of Fundamental Rights].”
(CJEU judgment in Schrems, C‑362/14, para. 94, with reference to Digital Rights Ireland and Others, C‑293/12 and C‑594/12, para. 39; emphasis added).
The reference to such actions “compromising the essence of the fundamental right” means that such “generalised” access or surveillance can never be justified: monitoring of a person’s electronic communications – which include online browsing – must always be targeted, on the basis of objective criteria indicating a need for such intrusion (cf. para. 91 of the Schrems judgment).

9.              The above dictum is as true with regard to children as it is in relation to adults. It may well be easier to justify the monitoring of a child’s online activities than the surveillance of an adult; the threshold may be lower – but a threshold there must be. For the state to authorise – nay, to demand[1] – the ubiquitous, suspicionless, untargeted monitoring (possibly by automated means) of all the online activities of a child in all educational environments compromises the essence of the child’s rights to private life, (online) association and freedom of expression (which includes the right to seek, receive and impart information and ideas without interference by public authority and regardless of frontiers).

II.            There are serious dangers in conflating “safeguarding” children with “promoting the welfare” of children that can lead to breaches of their rights

10.           The 2006 FIPR report stressed that:

It is important to be clear about the distinction between the government’s broad policy goal of ‘safeguarding children’ and the narrower focus of ‘child protection’, since they pose different data protection issues.
‘Safeguarding’ covers all the problems of childhood and is defined by the government as:
The process of protecting children from abuse or neglect, preventing impairment of their health and development, and ensuring that they are growing up in circumstances consistent with the provision of safe and effective care which is undertaken so as to enable children to have optimum life chances and enter adulthood successfully.
This comes from a standard DfES reference, which was the subject of extensive consultation, and which also gives the following definition for child protection:
“The process of protecting individual children identified as either suffering, or at risk of suffering, significant harm as a result of abuse or neglect”

11.           The report accepted that intrusive measures such as broad data sharing are not just justified but essential for child protection in this narrow sense – but argued strongly that the same does not hold true when it comes to “preventing problems from developing” in a much looser sense. Exactly the same holds true when it comes to intrusive, ubiquitous monitoring of young people’s online activities. If there are objective indications that a child or young person is at real risk of being drawn into crime, violence or “jihadism”, surveillance and interventions by educators, social workers and in serious cases the police may be justified. But that does not mean that children and young people should, without prior suspicion, be ubiquitously monitored for signs that they might be tempted into “extremism” or other bad behaviour (or even thoughts).

12.           We discern the same erroneous conflation of issues in the Draft Statutory Guidelines, which state:

Protecting children from the risk of radicalisation should be seen as part of schools’ wider safeguarding duties, and is similar in nature to protecting children from other forms of harm and abuse. During the process of radicalisation it is possible to intervene to prevent vulnerable people being radicalised. (para. 51)

13.           In fact, there is a fundamental difference between noting signs of actual (physical or mental) harm or abuse in a child and trying to identify whether a child or young adult is “at risk” of becoming “radicalised”, especially when “radicalisation” and “extremism” are defined as broadly as this:

Radicalism refers to the process by which a person comes to support terrorism and [other?] forms of extremism. (para. 52, emphasis added)
Extremism is vocal or active opposition to fundamental British values, including democracy, the rule of law, individual liberty and mutual respect and tolerance of different faiths and beliefs. We also include in our definition of extremism calls for the death of members of our armed forces, whether in this country or overseas. (footnote 13, emphasis added)
14.           Whatever exactly may be meant by “vocal or [note the ‘or’!] active opposition to fundamental British values” – it is clear that what is addressed here goes well beyond what is criminal under the law.
15.           In its seminal Handyside judgment, the European Court said, as long ago as 1976:

Freedom of expression constitutes one of the essential foundations of [a democratic] society, one of the basic conditions for its progress and for the development of every man. Subject to [the specified exceptions], it is applicable not only to "information" or "ideas" that are favourably received or regarded as inoffensive or as a matter of indifference, but also to those that offend, shock or disturb the State or any sector of the population. Such are the demands of that pluralism, tolerance and broadmindedness without which there is no "democratic society". This means, amongst other things, that every "formality", "condition", "restriction" or "penalty" imposed in this sphere must be proportionate to the legitimate aim pursued. (para. 49, emphasis added)
16.           As already noted, the imposition of certain interferences with freedom of expression and freedom to seek, receive and impart information may be justified in relation to young children that are not justified in relation to adults. However, we believe this cannot be stretched to the extent that children – all children under the age of 18 – must be prevented from looking for or discussing or even indulging in the dissemination of anything that “opposes fundamental British values”. Schools and other educational establishments should be places of learning, discovery and exploration – including learning about, discovering and even exploring information and ideas that “offend, shock or disturb” the British State or British mainstream society.

17.           In our view, the preventative ubiquitous monitoring of young people’s online behaviour, without clear prior evidence of serious dangers to them, to spot signs not of criminal matters but of matters that are otherwise societally frowned upon, is in fundamental breach of their human rights.

III.          There are serious dangers inherent in trying to predict and prevent “bad” outcomes for children, especially if this is done on the basis of profiling, data mining and what is now called “algorithmic decision-making”, that can lead to further breaches of their rights, without effective remedies.

18.           Using data mining/profiling software tools to seek out from large datasets (like the browsing records of all students at an establishment) “possible” or “probable” targets is fraught with danger – in particular if the aim is to find rare targets, when such tools will inevitably lead to many “false positives” or “false negatives” (or most likely both). We have both written about this in many publications. A quite detailed write-up of the issues is contained in a report one of us wrote with a French colleague in 2015.[2] Here, it may suffice to note two clear and present dangers:
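The false-positive problem described above follows directly from the base rate: when genuine targets are very rare, even a highly accurate screening tool flags mostly innocent people. A back-of-the-envelope Bayes calculation illustrates this (all figures below are purely hypothetical assumptions, and generous to the tool):

```python
# Illustrative base-rate calculation: when targets are rare, even a
# seemingly accurate profiling tool produces mostly false positives.
# All numbers are hypothetical assumptions, not real figures.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Probability that a flagged person is a true target (Bayes' theorem)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Suppose 1 in 10,000 students is a genuine target, and the tool is
# 99% sensitive and 99% specific -- far better than real systems.
ppv = positive_predictive_value(prevalence=1e-4, sensitivity=0.99, specificity=0.99)
print(f"Share of flagged students who are true targets: {ppv:.2%}")
# Under these assumptions, under 1% -- so roughly 99 of every 100
# flagged students would be false positives.
```

Under these (deliberately optimistic) assumptions the overwhelming majority of students singled out would be wrongly singled out, which is the heart of the objection to suspicionless algorithmic screening.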

(i)            Profiling tools are extremely likely to lead to “discrimination by computer”. The use of software to try to identify students supposedly “at risk” of becoming “radicalised” (in the sense of deemed to be drawn to ideas that are “opposed to fundamental British values”) will undoubtedly lead to the singling out of many individual students who have committed no criminal offences and most probably would not go on to commit criminal offences – but who will forever be stigmatised by an official label of being “anti-British” or “extreme”.

(ii)          It is becoming increasingly impossible to challenge the outcomes of such “algorithmic decision-making”, even if applied to more-or-less verifiable matters (such as whether a person actually went to a terrorist training camp).[3] When the label is so opaque as the definition of “extremism” used in the Draft Statutory Guidelines, it becomes even worse. How can a child prove that she was only “exploring” notions and ideas that run counter to mainstream ideologies, rather than “supporting” them?

19.           We fear that the Draft Statutory Guidelines are a fundamentally flawed attempt to counter bad ideas, or even to prevent children from being attracted to them. The measures proposed, and the ubiquitous surveillance they imply, will on the contrary alienate those already disenchanted with our society, and drive some of them into bad actions, not merely bad thoughts.

20.           They are also wide open to challenge in the European courts.

[1] As the Draft Statutory Guidance makes clear, schools and colleges must comply with them (unless [unspecified] exceptional circumstances arise) (p. 3)
[2] Marie Georges & Douwe Korff, Passenger Name Records, data mining & data protection: the need for strong safeguards, report prepared for the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (T-PD) of the Council of Europe, available at:
See in particular section I.iii (p. 22ff) on “The dangers inherent in data mining and profiling”.
[3] See the report mentioned in the previous footnote, in particular the sub-section on “The increasing unchallengeability of profiles - and of decisions based on profiles”, p. 28ff (with references).

Thursday, November 06, 2014

Protecting privacy in the GOV.UK Verify scheme

For the last two years I've been working with colleagues in the Cabinet Office's Privacy and Consumer Advisory Group to develop privacy principles for the government's online identity assurance programme. This is now close to launch, and got some front-page attention in The Times on Monday. Here is the just-published letter we sent to the newspaper with more details. The Government Digital Service has also published a response.


Today’s Times (4/11/2014) front-page story contains an error: “Virtual ID for everyone” should read “Virtual IDs for everyone”. It is a vital part of the scheme that we may all have plural identities.

For the last two years, we, as members of the Privacy and Consumer Advisory Group, have been working with the dedicated Cabinet Office team to define nine Identity Assurance Principles that, if implemented across government, would protect against the Verify scheme becoming a shadow identity card system.

Control by the citizen is at the heart of these principles. You choose (and can discard) your own virtual identities. They are not imposed on you by the state. You can read more on the principles at

Obviously a citizen using a public service (online or otherwise) needs to be identifiable to that service to some degree. But this does not mean a service provider should have access to any unnecessary information about the citizen. That is what the Verify scheme was conceived, laudably, to achieve. 

Our Identity Assurance Principles are intended to ensure it does achieve that in practice. We have recommended that all existing powers of data access or disclosure should be re-approved by Parliament as these powers have themselves been transformed by modern technology. We also call for effective forms of redress, and for an effective regulatory and judicial oversight over the use of such powers.

Public support for virtual identity will depend on trust and understanding. Our Nine Principles are designed to build that, but will only do so if members of the public know what they are, and that the authorities will obey them. That is why we have asked that, after the testing phase, the principles are written into law to ensure their general application.

Yours faithfully,
Guy Herbert, General Secretary, NO2ID
Louise Bennett, BCS Policy Board Member
Dave Birch, Consult Hyperion
Ian Brown, Professor of Information Security and Privacy, Oxford Internet Institute
Emma Carr, Director, Big Brother Watch
Dr Gus Hosein, Director, Privacy International
Dr Chris Pounder, Amberhawk
Dr Edgar Whitley, London School of Economics

Monday, September 01, 2014

A tour of NATO's cyber HQ

Little Green Men by Anton Holoborodko (Антон Голобородько), CC BY-SA 3.0

NATO is in the news today, declaring that a cyber-attack on any of the military alliance's members could lead to a joint response under Article V of the North Atlantic Treaty. Russia's invasion of Ukraine — reluctant as most NATO members are to label it as such — means this is not just a remote possibility.

I heard more about NATO's plans over the summer, when they were kind enough to invite me on a tour of their headquarters (outside Brussels), cyber-defence facilities (in Mons), and the Cooperative Cyber Defence Centre of Excellence in Tallinn (although unfortunately I couldn't make it to the latter). These plans will be finalised at the Wales Summit of NATO leaders this Thursday/Friday in Newport and Cardiff (whose poor residents have to put up with a 10-mile security fence).

Background and current strategy

NATO's mandate is cyber defence - it will not be carrying out "active defence" (e.g. striking back against hostile systems), nor coordinating member states' cybersecurity (which apparently remains a very sensitive national prerogative).

The first, basic, NATO cyber strategy came in 2008, following attacks on Estonian and Georgian systems by "patriotic hackers" that were strongly suspected to be coordinated by the Russian government. A more developed strategy was agreed in 2011, with an action plan mainly focused on securing NATO's own networks and systems, which link the member states' deployed facilities.

These systems have recently been upgraded in a €58m project to provide centralised protection for classified NATO networks across 51 sites, with three still to be completed. This gives commanders situational awareness and analytical tools, and constantly updates network sensors.

NATO has established a Cyber Defence Management Board to coordinate policy and military activity. It has defined minimum requirements for cyber protection for national networks that NATO depends on, and national cyber capability targets (e.g. national strategy, CERT, supply chain regulations) for 2019. This has been a major driver of investment and uniformity. The Cyber Defence Committee has the lead political role in policy governance, acting as a link between the North Atlantic Council and all other NATO committees.

NATO has a good EU partnership at staff level, and holds reciprocal briefings with the Organisation for Security and Cooperation in Europe, and the Council of Europe. There is an "intense tempo" of cooperation with five Western European non-NATO partners (Sweden, Ireland, Austria, Switzerland and Finland), as well as Australia and New Zealand. Following vetting for information sharing, mirroring arrangements in the intelligence domain, these countries can participate in cyber coalition exercises. NATO can blend cyber intelligence with classical intelligence to do much better attribution of attacks.

The new strategy

NATO's 2014 enhanced policy brings new elements:
  • A link between cyber and collective defence. Art. V applies on a political case-by-case basis; there are no general criteria for its application.
  • A focused exploration of the threat landscape.
  • A framework for assistance to allies in cyber crises and in peacetime — the key element is information sharing, alongside rapid reaction teams, NATO as a clearing house for bilateral assistance and the civil emergency planning process, then more generally situational awareness, early warning, exchange of expertise, interoperability, and impact analysis (made possible by increased national investment reducing concerns over free riding).
  • An explicit statement that international law is applicable in the cyber domain.
  • An increased emphasis on training, education and exercises, with “coherent” use of NATO schools.
  • NATO-industry Cyber Partnerships — to be implemented post-Wales, but there are already links with industry, mainly on procurement. NATO wants a different level of information sharing, with a structured platform (building on national sharing) and bigger regular meetings. This will be voluntary, but as inclusive as possible.

The Alliance already has three “smart defence” collaborative development projects between members:
  1. Canada, Netherlands, Germany, Romania and Finland are developing smart sensors, analytical tools, and an information sharing platform.
  2. A Malware Information Sharing Platform, developed at Mons, and offered to all member states. 50% of members are already participating, and this will become NATO-wide.
  3. Portugal has launched a training and education initiative, and wants to use the NATO school to become a major hub. This will be an element in a federated network, and make training more uniform, cheaper and more effective.
Estonia has offered their cyber range to NATO — training, education, and exercises are all increasing.


These all seem sensible measures. I was surprised at how determined many of the NATO members seem to be to preserve their own sovereignty even within the Alliance (although they do need to protect themselves against Russian spies). It is astonishing that (according to the New York Times) the US, UK and Germany will not share information about their offensive cyber capabilities even with their closest allies — leaving NATO officials to scour media reports of Edward Snowden's revelations. (I hope that my expert witness statements in Big Brother Watch v UK and Privacy International v GCHQ were helpful :)

NATO suffered a substantial Distributed Denial of Service attack for the first time on 15-16 March 2014, the night before the Crimean "referendum" on joining Russia, bringing down the NATO website for 12 hours. Successful attacks on public-facing websites have no impact on NATO readiness, but are embarrassing. The Alliance was previously focused on espionage attempts against their systems. 

The enhanced strategy clearly needs to be implemented quickly, before Putin's unconventional warfare tactics and Little Green Men start making higher profile "virtual" appearances in Ukrainian and NATO member systems.

Sunday, June 08, 2014

Don't spy on us!

Very inspiring today to see over 500 people turn up for the Don't Spy On Us coalition's day of action, on the first anniversary of Edward Snowden's leaks. There were some great speeches - amongst others from Bruce Schneier, Jimmy Wales, Duncan Campbell and Shami Chakrabarti. 

Here are my notes for my own panel remarks:

Maintaining privacy online is an ongoing struggle. We need changes in both technology and law.

Encrypting everything is a good starting point, and will raise the cost of mass surveillance. But it is not a panacea - it is not nearly easy enough yet for the majority of users, and anyway many organisations hold user data without sufficient organisational and technical controls to adequately protect it. 

NSA’s TURBINE programme is designed to allow control of millions of compromised systems. Even at much lower levels of sophistication, we see millions of machines in botnets. Where the Five Eyes states lead, other nations and then criminals will follow. We need much better tools for producing and verifying trustworthy systems.

Technologists can also help by developing usable open source security tools for non-geeks (GPGTools is a good example). But it's also important to work on standards (like the IETF) and find other ways to get mainstream providers to beef up security (like Google’s TLS monitoring).

One important benefit of the Snowden disclosures has been to force legal discussion of foreign intelligence collection into the open. This was previously an almost undiscussed area of international law. It's important to push stronger standards (like the Necessary & Proportionate principles) and even more importantly, to enforce them - through courts, the UN, international political processes like EU-US treaty negotiations - and every other available forum (such as the Council of Europe, WTO, TTIP…) 

This can be a boring unglamorous slog, and eats up campaign groups’ already scarce resources. But the anti-privacy voices in those venues have to be consistently countered. 

The most important way to protect online privacy is political. It takes thousands of loud voices to persuade politicians over the soothing noises of the security agencies (and the tabloid newspapers that think you can never have enough surveillance). We need many more Julian Hupperts, Claude Moraes and David Davises, in national and European parliaments, to get the long-term legal reforms required. So I hope everyone in this room is already a member of at least one campaign group like ORG or Liberty - and will get more involved in activism on these issues in future. 

Tuesday, March 04, 2014

Finally, some high-level UK debate on Internet surveillance

You wait nine months for some UK political debate on the mass Internet surveillance by the National Security Agency and GCHQ revealed by Edward Snowden, then two speeches come along at once...

This morning I went to listen to Nick Clegg, the Liberal Democrat leader and deputy prime minister, give his first major speech on the issue (there is a summary in the Guardian). It was thoughtful, and went into much more depth than is typical for top-level political debate on these matters.

Having given up waiting for their coalition partners, the Lib Dems are proposing some immediate changes: reform of the Intelligence and Security Committee, which should be chaired by an opposition Member of Parliament and hold its meetings in public whenever possible; allowing appeals from the Investigatory Powers Tribunal to the English courts; and publishing an annual government transparency report that gives much greater detail about state access to Internet communications and "metadata".

The deputy prime minister talked at length about the controversial "bulk access" to large amounts of Internet traffic that GCHQ has under the Regulation of Investigatory Powers Act. Unlike most other politicians, and certainly unlike former GCHQ directors I have heard speak on the subject, he argued that such large-scale access is not automatically acceptable so long as there are strict rules within NSA/GCHQ on access to the "collected" data.

Collection itself is intrusive (as the European Court of Human Rights has long recognised, in cases such as Leander v Sweden and Amann v Switzerland), and should only happen when necessary and proportionate. Indeed, as President Obama's review panel said:

"Although we might be safer if the government had ready access to a massive storehouse of information about every detail of our lives, the impact of such a program on the quality of life and on individual freedom would simply be too great. And this is especially true in light of the alternative measures available to the government... We recommend that the US Government should examine the feasibility of creating software that would allow the National Security Agency and other intelligence agencies more easily to conduct targeted information acquisition rather than bulk-data collection."
Meanwhile yesterday, shadow Home Secretary Yvette Cooper gave a shorter speech to Demos. She acknowledged the deficiencies of the existing legal regime, and that the Intelligence and Security Committee should be chaired by an opposition MP to give it more credible independence from the government, and given permanent technological expertise. She also said that the Communications Data Bill previously proposed by the government was "far too widely drawn, giving the Home Secretary unprecedented future powers, and with too few checks and balances, and has rightly been stopped."

There seems to be a developing consensus between the two parties. Yvette Cooper has called for much more public debate about Internet surveillance, echoing Nick Clegg's concern about a loss of public confidence in the intelligence agencies. Both want stronger oversight by converting the existing interception and intelligence commissioners – retired judges whose work is largely unknown by the public – into a higher-profile Inspector General. And both recognise that the Regulation of Investigatory Powers Act now needs changing, in areas such as stronger safeguards for "metadata", and looking again at the broad powers given for GCHQ surveillance of "external" communications that start and/or end outside the British Isles (i.e. most Internet communications).

The deputy PM has asked the MoD's external think-tank, the Royal United Services Institute, to convene an Obama-style review panel to report back on these issues after the next election.  By then, as Clegg said, there will be irresistible pressure for Parliament to update the UK legal framework to better reflect the realities of today's Internet - and perhaps a Labour-Lib Dem coalition that would make this happen. Hopefully those Conservative MPs such as David Davis, who have played a strong role in the public debate so far, will also be able to persuade their colleagues in government of the necessity of reform.

Wednesday, January 09, 2013

Could a cyber-attack "fatally compromise" the UK military?

The House of Commons Defence Committee has published a report on Defence and Cyber-Security, which concludes:
The evidence we received leaves us concerned that with the Armed Forces now so dependent on information and communications technology, should such systems suffer a sustained cyber attack, their ability to operate could be fatally compromised... The cyber threat is, like some other emerging threats, one which has the capacity to evolve with almost unimaginable speed and with serious consequences for the nation's security. The Government needs to put in place - as it has not yet done - mechanisms, people, education, skills, thinking and policies which take into account both the opportunities and the vulnerabilities which cyber presents. It is time the Government approached this subject with vigour.
I think this conclusion may be overstated. In a time of serious budgetary cutbacks, the government has committed serious new money — £650m — to cybersecurity activities (although this may have been concentrated too heavily at GCHQ). A small amount of that is going towards Academic Centres of Excellence in Cybersecurity Research, one of which is at Oxford. The report fails to draw an adequate distinction between risks to defence systems and broader national security. And while information security is not developing nearly quickly enough in critical national infrastructure, we are not yet at the point at which likely adversaries would have the motivation and capability to cause serious damage to property or loss of life via these vulnerabilities.

The conclusions Peter Sommer and I reached last year for the OECD in our report on global systemic cybersecurity risk still hold: this is a long-term planning concern for government, not a short-term panic. I've made these points in interviews this afternoon for the World Service and BBC Scotland.

Thursday, September 20, 2012

Confusion reigns over UK Internet freedom

The UK's Director of Public Prosecutions this morning published an extremely sensible statement after deciding not to prosecute Daniel Thomas, the author of a homophobic tweet about Olympic divers Tom Daley and Peter Waterfield:
“This was, in essence, a one-off offensive Twitter message, intended for family and friends, which made its way into the public domain. It was not intended to reach Mr Daley or Mr Waterfield, it was not part of a campaign, it was not intended to incite others and Mr Thomas removed it reasonably swiftly and has expressed remorse. Against that background, the Chief Crown Prosecutor for Wales, Jim Brisbane, has concluded that on a full analysis of the context and circumstances in which this single message was sent, it was not so grossly offensive that criminal charges need to be brought."
This was a positive application of the Human Rights Act and European human rights jurisprudence to a tweet that qualified for the Communications Act 2003 offence of a "grossly offensive" communication sent using a public electronic network. This offence clearly needs reviewing, as the DPP suggests:
"Social media is a new and emerging phenomenon raising difficult issues of principle, which have to be confronted not only by prosecutors but also by others including the police, the courts and service providers. The fact that offensive remarks may not warrant a full criminal prosecution does not necessarily mean that no action should be taken. In my view, the time has come for an informed debate about the boundaries of free speech in an age of social media."
Douwe Korff and I suggested a possible approach in a report for the Council of Europe's Commissioner for Human Rights last year.

The message does not seem to have reached the Greater Manchester police, who have this afternoon arrested a man over a Facebook page praising the alleged murderer of two officers. While repellent, is this really their highest priority right now? There are concerns that the police press conference (as well as a statement by the prime minister) may already have prejudiced the forthcoming murder trial.