“Everyone will be tracked, cradle to grave, with no possibility of escape.” – The Circle, Dave Eggers
For both public and private organisations, the incentive for increased data collection and monitoring has never been higher. In a period where global security and health concerns necessitate information sharing, and employees increasingly blend home and work environments, the divide between public and private life is becoming ever more blurred. Individuals’ actions, captured as geolocation data, facial recognition images and browser history logs, are routinely tracked, monitored and recorded by surveillance technologies used by law enforcement authorities, employers and consumer industries. These approaches can be seen in the deployment of contact-tracing apps, healthcare monitoring technologies, facial recognition, CCTV and other security systems.
However, as the value of data to corporations and policy makers has increased, two risks have grown in magnitude: the temptation to combine data in increasingly intrusive ways, and the attraction of increasingly complex data troves to cyber criminals. The key question for our debate is: where technologies are used to keep individuals safe, who ensures their usage remains accountable? Do we opt for a centralised model – the approach the UK’s contact-tracing app has taken – or provide individuals with more control over their information? Where systems increasingly rely on streams of information to function, what mechanisms need to be introduced to address the security and privacy risks they pose?
The Privacy Perspective by Kaveh Cope-Lahooti
One challenge posed by monitoring technologies is trading off the convenience and safety that technologies such as CCTV and biometrics offer against the demands of both privacy principles (including data minimisation and transparency) and individual rights (such as the right to opt out of, and challenge, such collection), particularly those established by data protection regimes since the GDPR’s enactment in 2016.
Notably, most technologies relying on Big Data – including contact-tracing apps and the training of facial recognition systems – are most effective when collecting data in an “n=all”, or indiscriminate, format (what Shoshana Zuboff, in Surveillance Capitalism, terms “surplus at scale”). By its nature, this disregards several core tenets of data protection. I would suggest that most individuals recognise the value of smart home technologies that can automatically turn on the heating based on occupants’ behavioural patterns, or the security need for monitoring employees’ building access logs. However, the temptation (and tendency) with mass data collection and surveillance can often be to exploit the short- and mid-term economic gains without considering the secondary matter of the loss of users’ privacy – until either a data sharing scandal (cf. Cambridge Analytica, 2018) or a data breach (cf. Marriott International, 2018) occurs.
Moreover, recent advances in legislation and standards have tilted the balance from a risk perspective by increasing the potential fallout from misuse of citizens’ or customers’ data. With the growth in consumer attention towards privacy, in addition to the application of data protection, employment and human rights legislation, we are finding that there are increasing reputational and compliance benefits to maintaining sound data governance. This creates the need to examine how organisational governance mechanisms can ensure privacy is adequately considered, and the extent to which industry or state regulation is necessary.
The Security Perspective by James Weston
Although it’s slightly more nuanced, for the purpose of this blog I’m going to broadly break down security into two components. The first is cyber security, which, at a high level, is principally concerned with protecting data or a process from unauthorised access, change, or manipulation (I did say at a high level!). As such it seeks to constrain as many technical variables as possible and ensure that systems are used as intended.
The second is the more traditional view of security, and that is the security of our physical communities and populations. In the debate around privacy and security, it is this component we are focussed on. Technology, and the huge amounts of seemingly inconsequential data that we produce every minute of every day, is a resource that many see worthy of taking advantage of to ensure our physical security.
It goes without saying that the more data we gather, the more we can model, predict, and identify concerning patterns of behaviour before they escalate. We can generate insights more quickly and more accurately than ever before by correlating these data sources to provide an insight greater than the sum of its parts. And surely that is a good thing?
We are reliant upon more organisations than we realise to collect, collate and aggregate our data, and then make society-shaping decisions from it. And while the intentions may be just, what happens when mission creep sets in? Or when our data is exposed through theft or a breach?
And I am not just talking about your date of birth, address, or bank account details – but all the accumulated data points from your life which make you, you! If these were all exposed, what is the collateral digital damage to an individual?
My observation is that most of the population are consumers of technology, not users. We are asked every day to make decisions about our technology and our data that most of us are not fully equipped to review critically. I want to end on the often-spouted line that fills me with dread: if you have nothing to hide, you have nothing to fear. This, alongside the justification of “greater” security, is often used as an all-encompassing argument to drum up support. However, we cannot yet see how our data will be used and examined in the future, and ultimately what the real-world consequences of this will be. As a digital society, we need to place a much higher value on all the data we generate going forward.
Want to hear more on this? These topics, as well as others, will be further discussed from privacy, security and risk perspectives in Gemserv’s latest GemTALK webinar, Who Watches the Watchmen, featuring Michelle Griffey, Chief Risk Officer at Communisis. Register using the link below: