You are being watched. The government has a secret system – a machine – that spies on you every hour of every day. I know, because I built it. I designed the machine to detect acts of terror, but it sees everything.
So begins the opening monologue of the CBS television show Person of Interest, spoken by the designer of ‘The Machine’, technology genius and billionaire Harold Finch. The Machine is a mass surveillance computer system, monitoring data from just about every electronic source in the world (phones, cameras, computers etc.), which it analyzes in order to predict violent acts. But given its omniscience, there are far too many predictions to act on, and so it is programmed to pass on only ‘relevant’ threats – i.e. major terrorist events – to the government.
The procedural element of the show is that Finch has a software backdoor that sends him the ‘irrelevant’ predictions so that he can try to stop those violent acts as well: each episode, he and his small team of law enforcement officers and former government agents are given the social security number of an individual connected to the threat, though they do not know whether that individual is the victim or the perpetrator.
The larger story arc, however, is all about the Machine – how Harold came to build it and the effect of doing so on both him and those around him; the power that such surveillance hands to whoever controls it, and the lengths some would go to in order to have that control; and what might happen if such a powerful ‘intelligence’ became sentient. And of course, hanging over the entire storyline is the tension between surveillance as a way of keeping people safe and surveillance as a tool of corruption.
The show is science fiction, but given the news stories listed below, we might say only barely – the Person of Interest future doesn’t seem that far off at all.
Surveillance via your own smartphone
We already know that smartphones can track everywhere you go via their built-in GPS, and the Person of Interest team certainly utilise that function to their advantage. But in the show, Finch’s team also often take advantage of a hack which “force pairs” their smartphone with the person of interest’s phone, allowing them to hear every conversation within earshot of that phone. It’s a rather helpful – if implausibly powerful – plot device, allowing them to eavesdrop from a distance with the simple touch of a button.
But in the real world, where phone microphones aren’t so easily commandeered and handsets are a bit tougher to hack, the nearest thing might be this recent hack by security researchers from Stanford University, who discovered a way to turn the gyroscopes within modern smartphones into crude microphones:
Here’s how it works: the tiny gyros in your phone that measure orientation do so using vibrating pressure plates. As it turns out, they can also pick up air vibrations from sounds, and many Android devices can do it in the 80 to 250 hertz range — exactly the frequency of a human voice.
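For a sense of how little code the core idea needs, here is a minimal sketch (in Python, and emphatically not the researchers’ actual code): treat logged gyroscope samples as an audio-like signal and band-pass filter them into the 80–250 Hz voice band. The file name, sample rate and single-axis layout are assumptions for illustration.

```python
# Minimal sketch of the gyroscope-as-microphone idea: isolate the speech band
# from raw angular-rate samples. Not the published attack code.

import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE = 800            # Hz -- assumed; many phone gyros report far less
LOW_CUT, HIGH_CUT = 80, 250  # approximate fundamental range of the human voice

def voice_band(gyro_samples, fs=SAMPLE_RATE):
    """Band-pass filter one axis of gyroscope data into the speech band."""
    nyq = fs / 2.0
    b, a = butter(4, [LOW_CUT / nyq, HIGH_CUT / nyq], btype="band")
    return filtfilt(b, a, gyro_samples)

# Hypothetical log file: one angular-rate reading per line, single axis.
samples = np.loadtxt("gyro_log.csv")
speech_like = voice_band(samples)
# 'speech_like' is a crude, noisy waveform; the published research then feeds
# features of such a signal to a classifier to guess digits or identify speakers.
```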
And if you think this sort of surveillance might be fairly unlikely in the real world, you should definitely watch this talk by Jacob Appelbaum on the militarisation of the internet and hijacking of technologies for surveillance (transcript here). As one pundit noted, “No matter how bad you think the NSA’s information surveillance and capture is, I can just about guarantee that this will show you that it’s an order of magnitude worse than you imagined”:
I’m betting by the time you get to the bit about the NSA’s ability to radiate people with up to 1 kW of RF (under the heading of “Specialized Philip K. Dick-inspired nightmares”) you’ll be getting a little freaked out about our brave new world…
The NSA is harvesting millions of facial images from the web for facial recognition
The Machine uses facial recognition to identify every face it picks up, across the entire world, through surveillance cameras. While that sort of observational power is likely to be beyond current technology (from the point of view of the public at least), it’s something that the spooks at the NSA would love to have at their disposal. One of the leaks from NSA whistleblower Edward Snowden detailed how the agency is harvesting millions of facial images from the Web for use in a facial recognition program called ‘Identity Intelligence’. And what’s more, the NSA is linking these facial images with other biometrics and identity data:
The NSA’s goal — in which it has been moderately successful — is to match images from disparate databases, including databases of intercepted videoconferences (in February 2014, another Snowden publication revealed that NSA partner GCHQ had intercepted millions of Yahoo video chat stills), images captured by airports of fliers, and hacked national identity card databases from other countries. According to the article, the NSA is trying to hack the national ID card databases of “Pakistan, Saudi Arabia and Iran.”
Note too that the FBI recently caught a fugitive who had been on the run for 14 years through facial recognition – their software matched an image on file to the fugitive’s photo on a visa application submitted under a fake identity. The FBI’s system seems fairly basic, but it’s likely that the alphabet-soup agencies have much more power at their fingertips than they divulge to the public…
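To get a feel for what a basic one-to-one face match looks like, here is a toy sketch using the open-source face_recognition Python library – not the FBI’s or NSA’s system – comparing a probe photo against an image on file. The file names are placeholders, and it assumes exactly one face is found in each image.

```python
# Toy face-matching sketch: encode faces as numeric vectors and compare them.
import face_recognition

known = face_recognition.load_image_file("fugitive_on_file.jpg")      # placeholder
probe = face_recognition.load_image_file("visa_application.jpg")      # placeholder

# Assumes one detectable face per image.
known_encoding = face_recognition.face_encodings(known)[0]
probe_encoding = face_recognition.face_encodings(probe)[0]

# Lower distance = more similar; 0.6 is the library's conventional threshold.
distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]
print("match" if distance < 0.6 else "no match", f"(distance={distance:.3f})")
```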
What can you do about the future chances of being tracked via facial recognition? Not a whole lot, although Daily Grail editor Cat Vincent does discuss some possibilities in this earlier post on the NSA’s facial recognition program.
All-seeing surveillance
When it comes to all-seeing surveillance, one of the tropes of the conspiracy genre is the ‘all-seeing eye in the sky’ – satellites with super-high-resolution cameras, able to watch our every move. And there’s no doubt that both the number of satellites and their capabilities are constantly expanding – and what’s more, it’s not just a government thing anymore. But with the development of drone technologies, space-based surveillance is starting to look redundant: why put a camera hundreds or thousands of kilometres away when you can have a mobile camera hovering directly above the location you’re interested in? Welcome to the new world of Wide Area Aerial Surveillance (WAAS) via drones:
Systems such as Gorgon Stare, ARGUS, Vigilant Stare and Constant Hawk are all developmental iterations of the Pentagon’s goal to be able to continuously survey a whole village, or even an entire city, via a single ‘sensor platform,’ or just a handful of systems that are networked together. Additionally, these systems are to allow multiple “customers,” or end users, to manipulate and leverage their collected video data in real time.
Generally, the Wide Area Aerial Surveillance (WAAS) concept works by taking a high-endurance aerial sensor “platform,” such as an MQ-9 Reaper or IAI Eitan unmanned aircraft system, or a blimp or airship, and marrying it with a WAAS-type sensor payload. This payload usually consists of a canoe-shaped pod that has high-resolution electro-optical sensors pointed in many directions in a fixed manner. Onboard computing and software can then stitch these staring cameras’ “pictures” together, creating a continuous high-resolution overall image of a large swath of land or sea below. Users can then instruct the WAAS system to send them a high-resolution live video feed of a certain area of that massive ‘fused’ picture. The WAAS system then data-links down the video of the geographical area requested by the user. Thus the user will have real-time streaming video imagery of a portion of the entire area WAAS is persistently viewing at any given time.
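A heavily simplified sketch of that ‘chip-out’ idea: the platform holds one enormous stitched frame per time step, and each customer is served a live crop of just the region they requested. Real systems do georegistration, tracking and video encoding; here the frame is a plain NumPy array and the request is a pixel bounding box – both assumptions made purely for illustration.

```python
# Simplified WAAS "chip-out": many users pull different live views from
# the same wide-area frame. Not any real system's software.
import numpy as np

def chip_out(stitched_frame: np.ndarray, x: int, y: int,
             width: int, height: int) -> np.ndarray:
    """Return the requested sub-window of the full wide-area frame."""
    return stitched_frame[y:y + height, x:x + width]

# One fused frame per time step (tiny here; real sensors run to gigapixels).
frame = np.zeros((4000, 4000, 3), dtype=np.uint8)

# Two 'customers' requesting different regions of the same persistent view.
user_a_view = chip_out(frame, x=120, y=300, width=640, height=480)
user_b_view = chip_out(frame, x=2500, y=1800, width=640, height=480)
```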
Keep increasing that video fidelity, throw in some advanced facial recognition tech, along with GPS tracking and an always-on microphone in your phone… and a Person of Interest scenario doesn’t seem that far off. Perhaps all that’s missing is ‘The Machine’ itself. But are we really that far from that scenario?
The rise of Artificial Intelligence
News stories touching on advances in artificial intelligence (AI) seem to be coming thick and fast lately, from a machine-learning algorithm finding things in fine art paintings that art historians had never noticed, through to a robot that studies YouTube videos to learn all about humans and how they interact with the world. And, three years on from the debut of the ‘basic’ AI of Apple’s Siri, ‘her’ inventors are now said to be building a radical new AI that will do anything you ask.
Perhaps most interesting of all, though, is Google’s interest in AI. A recent news report discussed how Google is using “an algorithm to automatically pull in information from all over the web, using machine learning to turn the raw data into usable pieces of knowledge”, which will become the Knowledge Vault, “a single base of facts about the world, and the people and objects in it”.
Knowledge Vault promises to supercharge our interactions with machines, but it also comes with an increased privacy risk. The Vault doesn’t care if you are a person or a mountain – it is voraciously gathering every piece of information it can find.
“Behind the scenes, Google doesn’t only have public data,” says Suchanek. It can also pull in information from Gmail, Google+ and YouTube. “You and I are stored in the Knowledge Vault in the same way as Elvis Presley.”
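What might “a single base of facts about the world” look like under the hood? Reports describe the Knowledge Vault as storing machine-extracted facts along with confidence scores, so here is a minimal sketch of that data model in Python. The triples, names and scores are invented for illustration – this is not Google’s code or data.

```python
# Minimal sketch of a Knowledge-Vault-style fact store: machine-extracted
# (subject, predicate, object) triples, each with a confidence score.
from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    predicate: str
    obj: str
    confidence: float  # probability the extractors assign to this fact

vault = [
    Fact("Elvis Presley", "born_in", "Tupelo, Mississippi", 0.98),
    Fact("Elvis Presley", "profession", "singer", 0.99),
    # The same schema holds for ordinary people scraped from public pages:
    Fact("Jane Doe", "works_at", "Example Corp", 0.62),  # invented example
]

# Keep only the facts the system is reasonably sure about.
confident = [f for f in vault if f.confidence >= 0.9]
```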
And earlier this year, it was revealed that Google had bought AI start-up DeepMind for an estimated $400 million, bringing it under the umbrella of the Google X division, a facility dedicated to making major technological breakthroughs. The reasoning behind Google’s interest in AI sounds almost like a description of the Machine in Person of Interest:
Much of the fundamental infrastructure within Google is based on language, speech, translation, and visual processing. All of this depends upon the use of so called Machine Learning and AI. A common thread among all of these tasks and many others at Google is that it gathers unimaginably large volumes of direct or indirect data. This data provides what the company calls “evidence of relationships of interest” which they then apply to adaptive learning algorithms. In turn these smart algorithms create new potential opportunities in areas that the rest of us have yet to grasp. In short, they might very well be attempting to predict the future based on the search/web surfing habits of the millions who visit the company’s products and services every day. They know what we want, before we do.
A whole new level of control
Most of us are familiar with social networks nagging us to complete our profile, adding in those few bits of private information that we haven’t yet shared with a faceless corporation. In one particular episode of Person of Interest, Finch and his ex-CIA ‘muscle’ John Reese discuss the amount of information that people happily put up on social networks:
Reese: Never understood why people put all their information on those sites. Used to make our job a lot easier in the CIA.
Finch: Of course, that’s why I created them.
Reese: You’re telling me you invented online social networking, Finch?
Finch: The Machine needed more information. People’s social graph, their associations… the government has been trying to figure it out for years. Turns out, most people were happy to volunteer it. Business wound up being quite profitable too…
Surveillance, at its heart, is a method of control. But the growth of social networks – and in particular the move to ‘filtered feeds’, where you see only the fraction of the data stream directed at you that certain algorithms have determined to be most relevant – now allows a less passive and more fine-grained, pernicious type of control: direct influence over your mind. Once the social network takes control of what we see in our timeline or stream, direct manipulation becomes a real possibility. Exhibit A: a recent Facebook experiment in which user feeds were manipulated to see whether their emotions could be affected:
For one week in January 2012, data scientists skewed what almost 700,000 Facebook users saw when they logged into its service. Some people were shown content with a preponderance of happy and positive words; some were shown content analyzed as sadder than average. And when the week was over, these manipulated users were more likely to post either especially positive or negative words themselves.
This tinkering was just revealed as part of a new study, published in the prestigious Proceedings of the National Academy of Sciences. Many previous studies have used Facebook data to examine “emotional contagion,” as this one did. This study is different because, while other studies have observed Facebook user data, this one set out to manipulate it.
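The mechanics of that kind of manipulation are disturbingly simple. Here is a crude sketch of a feed filter that scores posts for emotional wording and withholds a share of one category – the tiny word lists and drop rate are invented stand-ins for the LIWC dictionaries and experimental design the study actually used.

```python
# Crude sketch of emotionally skewing a feed. Not Facebook's algorithm.
import random

POSITIVE = {"happy", "great", "love", "wonderful"}   # invented word lists
NEGATIVE = {"sad", "awful", "hate", "terrible"}

def emotion(post: str) -> str:
    """Very rough classification of a post's emotional tone."""
    words = set(post.lower().split())
    if words & POSITIVE and not words & NEGATIVE:
        return "positive"
    if words & NEGATIVE and not words & POSITIVE:
        return "negative"
    return "neutral"

def skew_feed(posts, suppress="positive", drop_rate=0.5):
    """Return a feed with a share of one emotional category withheld."""
    return [p for p in posts
            if emotion(p) != suppress or random.random() > drop_rate]

feed = ["What a wonderful day", "Feeling awful about the news", "Lunch was fine"]
print(skew_feed(feed, suppress="positive"))
```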
The near-ubiquity of Facebook, and its ability to filter feeds, opens a whole Pandora’s box of control. What if Facebook expands into other business fields, then modifies feeds to feature only products from its subsidiaries and to hide competitors’ products? Or more dramatically, what if the control is of a political or ethical nature, censoring certain ‘news’ that particular people don’t want seen? The recent Ferguson demonstrations offer an excellent example of the difference in exposure to certain news stories between the algorithmic filtering of Facebook and the ‘raw feed’ of Twitter. And yet there are now suggestions that Twitter’s timeline too will soon be filtered.
And if you think it’s at least a small blessing that the government can’t control large corporations like Facebook, remember that such companies have already been forced to help the government spy on us all – Yahoo was even threatened with a $250,000-a-day fine if it didn’t hand over user data to the PRISM program.
In the end, it seems we might all be persons of interest to the government…
If the above article hasn’t scared you off social media, you can keep up to date with more fascinating stories like this one by liking The Daily Grail’s Facebook page, following us on Twitter, and/or putting us in your Google+ circles.