Shakespeare's great villain Iago spoke of reputation as "an idle and most false imposition: oft got without merit, and lost without deserving". But at the same time, "the immediate jewel of [our] souls".
This conflict between reputation as something that is hazardously imperfect, yet of deep personal significance, is captured rather exquisitely by the Google Spain case. This startling data protection ruling from Europe's highest court has caught the entire internet off-guard.
The Court of Justice of the EU (CJEU) found that Spanish citizen Mario Costeja González had a right to ask Google to remove the links to two 16-year-old newspaper articles about the foreclosure of his home due to unpaid debts (which he subsequently paid).
The original articles are still sitting in the newspaper's archives. Pre-web, they would have stayed there and gathered dust. Post-web, the articles keep naggingly resurfacing whenever Mr Costeja is googled. What the CJEU has just done is to affirm a right to make Google "forget" the existence of those articles.
Mr Costeja is just an ordinary guy: no public official, no rock star. There is no reasonable public interest in his rather unremarkable life. But the continued indexing of these old articles undoubtedly affects his reputation. It is not improbable that, merely because those articles still surface on Google, a bank could deny him a loan or a landlord could refuse him a lease. So, in general terms, Mr Costeja has some grounds for concern.
The problem with this decision is what lawyers call "the floodgates".
Courts have got into the practice of avoiding floodgates. To the bane of many and the benefit of a few, they tend instead to issue narrow, case-specific rulings that provide little certainty. This decision, by contrast, casts itself boldly and widely on the open sea. It privileges an individual's interest in information about themselves above all other interests (with the major exception being the public interest in the lives of public figures).
The immediate reaction to the decision has been, on the whole, negative. At best, it is reckoned to be hopelessly unworkable. At worst, critics pan it as censorship. While there is much to deplore, I would argue that there are some important things we can gain from this decision before casting it roughly aside.
The Law Needs an Upgrade
The first thing to say is that the case is a logical product of applying current European data protection law. The law can be read to support a person's legitimate interests in almost any information about themselves. After sitting and stewing on this case for almost a year, the Grand Chamber has brought this uncompromisingly to the fore. And what it says has implications well beyond Google.
The fact that there is such an enormous chasm between what the law says and what many of us perceive regarding "internet freedom" reflects the desperately anachronistic state of data protection law. That law was born pre-web, when computers numbered in the hundreds of thousands worldwide and the future impact of internet-related technologies was drastically underestimated. The law needs reform and public debate.
This case accelerates a debate that, frankly, we should have had a long time ago. It makes us sit down two uncomfortable room-mates -- impotent, less-than-house-trained Privacy, and its rambunctious, frequently offensive friend Freedom -- and sort out some house rules that are fair and workable. It also makes us stare down the very different notions of freedom of expression that prevail around the world.
Beyond forcing necessary debate and reform, there are two reasons why we might find some utility in the decision. The first is that it shows an attempt at real respect for digital rights, even if not fully thought-out. The second is that it invites creativity about rapid technical solutions that could be implemented to address the case's core concerns and that could be of benefit to us all. (So rapid, in fact, that we've gone ahead and started to build some. But more on that later.)
Taking Individual Rights Seriously
At one level, this case is a triumph for the individual. We are living in a time when sensitivity to digital rights couldn't be higher, but those rights are routinely and easily trampled by the might of government and big business.
This case is a pushback. It does not just blithely fold to practical concerns, but it attempts to give real recognition to the interests of individuals. For this, it should be lauded. Such bravery is rare. It is particularly welcome that the court doesn't engage in linguistic deception for administrative convenience, such as denying that Google processes data and conducts operations in Europe. The US FCC could take a leaf out of this book and stop dealing with ISPs under the falsehood that they are not telecommunications providers. We need to talk straight to give justice credibility and teeth.
We also need to be even-handed. Intermediaries have long been regarded as key recruits when it comes to protecting rights online. But those rights have on the whole been corporate economic rights (i.e. content industry copyright interests), rather than human rights. If intermediaries can exercise some (albeit limited, due to Google's craft) control for copyright purposes, then why not for privacy and other individual rights too?
As citizens, we must be careful not to be our own worst enemies, or to play unwittingly to an invisible hand. For as long as individuals are the underdogs of the net -- and suffer no delusions, we are certainly the underdogs -- we need to band together.
What this case should ideally provoke is an unflinching reflection on our contemporary digital reality of walled gardens, commercial truth engines, and silent stewards of censorship. The CJEU is painfully aware of the impact of search engines (and 'The' search engine, in particular). But we as a society should think about the hard sociopolitical problems that they pose. Search engines are catalogues, or maps, of human knowledge, sentiments, joys, sorrows, and venom. Silently, with economic drivers and unofficial sanction, they shape our lives and our interactions.
The fact of the matter here is that if there is anyone who is up to the challenge of respecting this ruling creatively, Google is. But if early indications are anything to go by, there's a danger that we'll unwittingly save Google from having to do so: by rejecting the decision in practical or legal terms; by allowing Google to retreat "within the framework of their responsibilities, powers and capabilities" (which could have other unwanted effects and unchecked power, by contrast with transparent legal mechanisms); or by working the "right to be forgotten" out of law through the revised Data Protection Regulation, all under the appealing but ultimately misguided banner of preventing censorship.
Let's, for once, be one step ahead. What we certainly don't want is for those with might and means to be able to curate their identities online. Regrettably, this already happens. And it is one reason for a more creative response than simply letting a bold ruling slide.
Adopting Creative Tech-Enabled Solutions
Happily, there are great opportunities available for creative, rapid, and adaptable technical solutions. The remedy that the court discusses -- erasure of lawful material online -- is undesirable and problematic in all sorts of ways for freedom of speech and press. But it should be noted that this is the law's backstop.
What Google forgot, and what we must remind ourselves, is that (at least for information that is not sensitive data) you only get to the ball-game of erasure and forgetting if you fail to comply with data-processing principles. And those principles include mechanisms for rectification of inaccurate, out of date, or incomplete data. (In a new, better-modelled data protection law sensitive to the breadth of available data, those three categories might be conditional on them being somehow harmful or prejudicial, but for the moment we take the law as it stands.)
The idea of data protection law is that citizens should have some avenues for redress against information that affects them. The problem with the sheer breadth of the law's scope is that it is practically unenforced, and avenues of redress are a Dickensian debacle. To take the debate beyond the despair that is presently reigning, what if we took the law seriously, in a way that is transparent and satisfies individual interests, while also avoiding overly curating, curtailing, and re-writing our imperfect map of society?
An all-round better solution than "forgetting", "erasure", or "take-down", with all of their attendant problems for free speech and the rights of other internet users, is a "right to reply" within the notion of "rectification". This would be a tech-enabled solution: a capacity to associate metadata, perhaps in the form of another link, with any data that is inaccurate, out of date, or incomplete, so that the individual concerned can tell the "other side" of the story.
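As a rough sketch of what that metadata might look like (the structure, field names, and URLs below are hypothetical illustrations for this article, not anything prescribed by the ruling or implemented anywhere), a reply could be a small record tying the disputed item to the reply link, the rectification ground invoked, and the date it was lodged:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum


class Ground(Enum):
    """Rectification grounds on which a reply might be lodged."""
    INACCURATE = "inaccurate"
    OUT_OF_DATE = "out_of_date"
    INCOMPLETE = "incomplete"


@dataclass
class ReplyRecord:
    """Metadata associating a disputed item with the data subject's reply."""
    disputed_url: str   # the page the individual objects to
    reply_url: str      # where the "other side of the story" lives
    ground: Ground      # which rectification ground is invoked
    lodged_on: date     # when the reply was lodged


# Hypothetical example, loosely modelled on the Costeja facts.
example = ReplyRecord(
    disputed_url="https://newspaper.example.es/archive/1998/auction-notice",
    reply_url="https://reply-register.example.eu/replies/12345",
    ground=Ground.OUT_OF_DATE,
    lodged_on=date(2014, 6, 1),
)
```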
We have the technology to implement such solutions right now. In fact, we've done a mock-up envisaging how such an approach could be implemented.
Search results could be tagged to indicate that a reply has been lodged, much as we see with sponsored content on social media platforms. Something like this, for example:
A mock-up of a Google search result tagged by someone with the 'right to reply'
A register of replies could then be established to interface with any search engine, allowing the engine to query whether a right of reply or erasure has been exercised. Clicking through to the register would reveal details, supplied by the individual exercising their data protection rights, about why and when the information was rectified or erased.
A mock-up of what a 'right to reply' register might look like
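For illustration, here is a minimal sketch of how a search engine might consult such a register (the endpoint, response format, and tagging logic are all assumptions for the purposes of the example; no such register exists today):

```python
import json
import urllib.parse
import urllib.request
from typing import Optional

# Hypothetical register endpoint -- no such service currently exists.
REGISTER_API = "https://reply-register.example.eu/api/lookup"


def lookup_reply(result_url: str) -> Optional[dict]:
    """Ask the register whether a right of reply or erasure has been
    exercised for this URL; return the register entry, or None."""
    query = urllib.parse.urlencode({"url": result_url})
    with urllib.request.urlopen(f"{REGISTER_API}?{query}") as resp:
        entry = json.load(resp)
    return entry or None


def tag_search_results(results: list[dict]) -> list[dict]:
    """Annotate results for which a reply has been lodged, much as
    sponsored content is labelled on social media platforms."""
    for result in results:
        entry = lookup_reply(result["url"])
        if entry:
            result["tag"] = "A reply has been lodged for this result"
            result["reply_url"] = entry.get("reply_url")  # click through to the register
    return results
```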
This would probably be a good start to actually addressing the thousands of people with legitimate interests who are lined up outside data protection authorities, trying to enforce the law.
It is a sad reality of the internet that it is capable of multiplying falsehood and exposing humanity's darkest cruelty. Most of us choose to accept and ignore this imperfection, reconciling ourselves, like Iago, to the view that it is "idle and most false". On the other hand, for those who want to engage and reply, there ought to be some practical options available.
This is part of a broader post-regulatory ideal that our best solutions lie in alternatives to expensive, cumbersome, post-hoc legal procedures and rights-clearance mechanisms. Our best solutions use technology to assist law in allowing us to achieve our interests, and in not allowing those jewels to be traded lightly.