On Cambridge Analytica, surveillance and democracy
Last month, whistleblower Christopher Wylie revealed that his former employer, Cambridge Analytica, had harvested millions of users' data in order to provide targeted political ads during the 2016 US elections. The first thing that crossed my mind when the news broke was that it has been almost exactly five years since another whistleblower famously leaked highly confidential documents detailing deep government surveillance programs. One of the key programs revealed in the 2013 Snowden leaks was PRISM, under which the tech giants Facebook, Google, Apple and Microsoft were obliged to comply with government data requests for use in investigations. The scandal at the time seemed to be about the government's power to render our digital privacy meaningless, but it should instead have been about just how much data these corporations actually held on each and every one of us, and what they were actually using it for. There has been a common saying in recent years that Google and Facebook know you better than you know yourself. While it is perhaps meant with a degree of irony, it is a statement far truer than it ought to be, and its implication, given the derailing of the democratic process we have seen recently, is terrifying. Meanwhile, Edward Snowden is still in exile in Russia, Donald Trump is in the White House, and very little, it seems, has been done in the intervening period to tackle the questions raised back then. If anything, as a society we seem to have totally and willingly forfeited our right to privacy in exchange for convenience. Smartphones and mobile technologies have proliferated to mediate just about every interaction in our everyday lives, at a price we seem all too happy to pay. So how on earth did we let it get this far? Was this always an inevitability? And what are the ultimate consequences of our present-day surveillance society?
In order to unpack the concept of the surveillance society, we must first understand the concept of surveillance itself. Derived from the French sur (above) and veiller (to watch), the word literally means 'watching from above'. This relative vantage point of the watcher over the watched is crucial to its understanding, because while we may not have literal prison guards in watchtowers overlooking our daily lives, there is always an implied asymmetry of power in any surveillance system. Those with the ability to observe are afforded a measure of control over the lives of the observed simply through knowledge of their actions. Modern surveillance systems, such as the keeping of records on employees and citizens, can be traced back to 18th-century Britain and France, when bureaucracy became the de facto mode of government administration, though in some form or another surveillance has always been a function of social systems, even those predating modern societies. However, the means by which surveillance is conducted have changed greatly through the ages, and it is at this point that we must note the centrality of technology to both its administration and its implementation. With every technological advance, the possibilities for surveillance become ever more sophisticated and pervasive.
Another 18th-century idea, the Panopticon, originally conceived by Jeremy Bentham, one of the founding fathers of utilitarianism, has since become a key topic of inquiry within the field of surveillance studies. The Panopticon was initially designed as a prison: an annular building on the periphery, with an axial tower at the centre. Prisoners were to be housed in cells in the outer building while a guard occupied the tower, and because each cell had two windows, one facing the tower and one to the outside letting in light, every single cell was fully visible from the guard's vantage point. As a prisoner in such a setting, not only is your every action watched by a supervisor, but even when no one is watching, the knowledge that you could be watched is enough to make you alter your own behaviour towards what is deemed acceptable and normal. It was Michel Foucault, in Discipline and Punish (1975), who revived the idea of the Panopticon as a model for understanding the surveillance practices of his contemporary society. In a reversal of the principles of imprisonment in the dungeon (to confine, to hide and to deprive of light), the subject is placed in the light, and visibility itself becomes a trap, Foucault writes. He saw in society at large the same fundamental arrangement of axial visibility coupled with lateral invisibility, used not merely to arrange power for power's own sake, but to increase economic productivity and strengthen social forces. Indeed, when Jeremy Bentham first proposed the idea, it was his brother Samuel who suggested that it should rather be used as an abstract political technology, polyvalent in its applications.
When we look at how surveillance technologies have evolved through the 20th century, and especially over the last few decades, certain patterns emerge. With the proliferation of CCTV cameras on our streets, and digital computing technologies coming first into our homes and more recently into our pockets wherever we go, our individual anonymity and privacy have all but disappeared, particularly for those living in the post-industrial nations of the world. From the moment we pay for our coffee, to when we ride the bus, to when we binge-watch documentaries about the far right before bed, our every action is logged and our every footstep documented, stored in places we have no access to or control over. There is a centralisation of personal information, alongside a decentralisation of the technologies of surveillance, both hallmarks of a surveillance society as originally defined by Gary T. Marx, with the social totality coming to function as a Panoptic machine. It serves two of the most fundamental needs of a capitalist society: economic productivity, and the behavioural discipline with which to maximise that productivity.
And so here is where we return to Cambridge Analytica, and the ongoing debacle surrounding Facebook. At first, when the story broke, there was a false perception among the general public that Cambridge Analytica had carried out a massive illegal hack of personal data in order to aid Donald Trump's presidential campaign. Yet while the subsequent Channel 4 investigation did indeed reveal increasingly shady and illegal dealings by the company, the irony is that it gained access to the data of 87 million users through mostly legitimate means. From Facebook's perspective, the whole affair is particularly embarrassing because this is precisely what its entire business model is built on. Facebook has always been nothing more than an enormous datamining operation disguised as a social network, whose primary product is user data, sold to advertisers in order to create the most efficient and streamlined path to consumption possible. Google, similarly, is a datamining operation dressed up as a search engine. Over the past few years, as these technologies have become ever more mobile, and algorithms and AI more sophisticated, so has grown the ability of these corporations to build frighteningly accurate psychological profiles from our locations, activities, and perceived preferences. It is even widely believed that Google listens to your conversations through your phone's microphone in order to provide bespoke search results and advertising. What is clear, then, is that through increasingly inescapable means, we are quietly coerced into being the best possible consumers we can be, while simultaneously being presented with an illusion of ever-expanding choice and freedom.
When the Snowden leaks were published in mid-2013, senior officials from both the NSA and GCHQ came forward to vehemently assert that the government was collecting only metadata, not the actual content of communications. This was seemingly meant to assure the public that their conversations weren't being individually listened to. In any case, in the infamous words of then Foreign Secretary William Hague, "If you are a law-abiding citizen…then you should have nothing to fear." And even though algorithms had been in place ever since the post-9/11 anti-terrorism programs to detect patterns of buzzwords, he did have a point in that the sheer volume of digital communications being conducted at any given moment in a place like the UK or the US would be physically impossible to examine thoroughly for content. However, metadata, the record of who communicates with whom, when, and for how long, is perhaps even more powerful for detecting patterns, and thus for predicting our future behaviour. Let's be honest: we humans are largely creatures of habit whose actions are mostly contingent on routine and cycles. Sure, you might think you're a bit of a maverick, that you torrent the works of obscure Hungarian filmmakers in your spare time, or that you shop for both Back to the Future-themed keyfobs and Kurt Vonnegut novels on the same visit to Amazon (whaaat), or that the Mongolian throat singing tutorial you were watching on YouTube the other day was totally wacky. But on the whole, chances are that if you are an adult living in a late-capitalist country today, all your actions, from the toothpaste you buy, to your political leanings, to the type of porn you watch, are both known and easily predicted from similar patterns seen in the actions of millions of others.
So what we really see with the Cambridge Analytica affair is what happens when the supposedly democratic electoral process assumes the same functional structure as surveillance-based social media marketing. The use of social media in election campaigns is hardly new; Barack Obama famously used Facebook ads in his landmark 2008 campaign, an entire decade ago. However, a major change in the media landscape since then has brought an entirely new dimension to the current issue: the loss of the mainstream media's credibility in the eyes of the public. In HyperNormalisation, his latest documentary, Adam Curtis truly captures the feeling of living in our post-truth world, in which reality itself is in a state of perpetual flux. This is a world in which the concepts of truth and fact are no longer commonly agreed upon, and in which, particularly when faced with existential threats such as climate change and the collapse of capitalism, it becomes increasingly difficult even to begin to comprehend the scale of the inhuman forces our lives are ultimately subject to. Within the framework of consumer capitalism, then, truth and reality themselves become consumer choices. A democratic vote becomes a lifestyle decision. Given the public's loss of trust in the mainstream media, a growing population relies on social media platforms for its news, platforms which already act as the ultimate echo chamber. When the political preferences and behaviours of users on those sites are quantified, fed through an algorithm, and used to feed back further information (much of it likely to be false) that reinforces those same worldviews, you end up undermining one of the basic prerequisites of any democratic society: a properly informed electorate.
What's more, given that this power to shape our very perceptions of reality lies in the hands of capitalist corporations with no motive other than profit, democracy is yet again put up for sale to the highest bidder, as the revelation of Cambridge Analytica's involvement in so many elections around the world makes evident.
Around the time of the Arab Spring in 2011, there was a somewhat naïve sense of optimism among the Western liberal commentariat that the democratising potential of the Internet was finally coming to fruition. Citizens organised themselves using Facebook and Twitter to protest en masse, and dictatorships toppled one by one across the Arab world. The discussion surrounding a new technology's ability to challenge incumbent systems of power is nothing new either, whether with the movable-type printing press or the television. If knowledge is power, then the ability to determine and control the production of knowledge is the source of that power. The Internet, like the technologies that came before it, was supposed to be the great leveller, the information superhighway that allowed the ultimate dissemination of knowledge and political capital to the masses. Instead, seven years on, the Syrian civil war rages on in an unspeakable humanitarian crisis, Donald Trump is still in the White House (yes, even since the start of this article), and Mark Zuckerberg has given his testimony on the Cambridge Analytica affair, claiming no knowledge of any wrongdoing. Of course he didn't. And just as the NSA and GCHQ got off scot-free five years ago, while Edward Snowden is still deemed a traitor by the US government, Zuckerberg is almost certainly going to walk away from this completely unscathed. At some point in the latter half of the 20th century, the role of politicians in Western countries changed from enacting the will of the people to managing public affairs with the profits of corporate interests in constant consideration. The senators and congressmen and women at his hearing are proposing regulations, some with the EU's GDPR (General Data Protection Regulation), which comes into force next month, in mind.
However, as long as the technologies enabling surveillance in every aspect of our daily lives continue to proliferate and remain profitable, and we blindly plug ourselves ever deeper into them, any such half-baked legal solution will have absolutely no consequence. Facebook's stock, certainly, has been rising.
Macro-level changes in history are always difficult to gauge fully, particularly while they are happening. After all, one could argue that the invention of the printing press in the 15th century only truly bore fruit more than three centuries later, with the Enlightenment, the French Revolution and the overthrow of feudalism in Europe. So perhaps, similarly, it is simply too early to tell at this stage in the advent of a technology like the Internet. Even so, the prospects look less rosy now than they did even in 2013.
Follow my crappy Twitter account here.