Is facial recognition tech the ultimate invasion of your privacy?

In future, we may have to walk around in disguise to protect ourselves from the government's invasive eyes

03 February 2019 - 00:09
Xu Li, CEO of SenseTime Group, is identified by the company's facial-recognition system on a screen at SenseTime's showroom in Beijing. Facial recognition technology could soon be used to collect a database of information on ordinary citizens.
Image: Gilles Sabrie/Getty Images

One of my favourite things to do is people-watch. On any given day when I have time to spare, I make my way to a public space and guess at people's conversations - are they engaged in light banter or in the midst of an argument? I also guess their ages and, without thinking about it, what they do in life, because my brain automatically assumes that lawyers look a certain way, artists look a certain way and scientists look a certain way.

When I do this, I am also doing something secondary: determining their social standing to a degree and, by default, the amount of privilege they have access to.

This is called physiognomy: the notion of what can or cannot be read in a face, and it dates back to ancient times. Fast-forward a few centuries and we find ourselves in an age of technological harnessing, where an arsenal of tools has been and is being designed to dig deep into the one physical identifier we can almost never change - our faces.

Artificial intelligence forces us to face a future where we are unwillingly disclosing personal information about ourselves that can be used against us.

Now you might ask: what's the big deal? Face recognition has long been used as a security mechanism. CCTV is often an effective way to identify crimes and catch long-sought criminals. Even the capturing of biometric data is old news. You can unlock your phone with your face these days, and in some countries labs are working on doing the same for credit cards: you won't need a PIN, you'll just need a selfie.

And while these measures have their advantages (people may be able to guess a number but they can't fake your looks) they also come with massive ethical infringements.

There are even humans regarded as super-recognisers. Many of them are being studied, because if we understand how their brains work, we can build the same abilities into algorithms and double, even triple, the amount of super-recognition - serving us a hot platter of very little moderation and a lot of mania. These advances mean that not only is Big Brother watching but so are Big Father, Big Mother and the parent of them all, Big Data.

In China, for example, police officers wear smartglasses that can spot suspects in crowded places. Smartglasses enable the government to surveil citizens far beyond the reach of the naked eye, so while recognising criminals the technology also builds a database of ordinary citizens - a database that includes habits, social credit and even people's friends and connections.

So when the database is accessed, the government has a wealth of information on you, including granular information like where you shop and which brand of milk you prefer.

When is too much surveillance too much? Human-rights activists will argue that as soon as someone is infringing on your privacy without your explicit permission, the line has already been crossed, and so it has, many times.

Can artificial intelligence moderate its own moral compass? No, is the short answer. It cannot. I may guess at someone's social standing and career based on their face, but I can correct myself. The likelihood of these systems correcting themselves is, well, zero.

Take for example the facial-testing programmes that are being rolled out in Chinese schools to analyse and store students' data and help teachers recognise whether kids are paying attention in school or not. The advantage: the ability for educators to help kids perform better. The disadvantage: when you rely on a robot to tell you what an organic human expression means, what you get is exactly that - robotic results, and ultimately the policing of human expression.

It threatens a person's right to private thoughts, which often show as public facial expressions. Gazing at the ceiling because I am thinking about the cheese sandwich in my backpack is not a universal sign of a learning disability.

Has the stuff of fiction become the stuff of threat? Facebook already tailors ads according to our search history, though in many cases those ads still miss the mark on our ages or genders.

Sixteen years ago the film Minority Report was released. In it, advertising is personalised according to facial-recognition data. Tom Cruise's character walks down the street and is bombarded with customised adverts for cars, beverages, everything. If a hacker with bad intentions knows your age, gender, where you shop and what you're most likely to buy, it opens you up to a whole universe of scamming.

Is walking around in disguise the only reasonable way to protect ourselves in the future? Perhaps the (ironically) faceless hacking group Anonymous is on to something. Maybe they know something we don't, and that's why they always wear the Guy Fawkes mask, with its smirking pale face, thin black moustache and narrow, lifted eyebrows. Maybe that's the only way to save face in a not-so-distant future.
