
Facial recognition is tech so special it should be banned


In the 15th century, an insidious scourge stalked Europe. It threatened to put people out of work, ruin their brains, and even take them further away from God. That evil? According to Abbot Johannes Trithemius, it was the printing press.

Now, of course, the printing press and its many effects are seen not just as good but as foundational to modern societies, even though the press was also used to produce Adolf Hitler’s manifesto, “Mein Kampf.”

But this is how we tend to deal with technology: as an often-ambivalent thing around which we work to highlight the positive and mitigate the negative. It’s true for everything from cars to TV to smartphones.

Are there, however, some technologies so heavily slanted to the negative that we should just outright ban them?

It’s a question that has been on my mind since facial-recognition technology shifted from concept to reality. It can be used to, say, unlock your phone, but it also extends to systems of cameras and artificial intelligence that detect and identify people from afar. Perhaps most prominent and worrying are systems like those of Clearview AI, a company that can deploy this tech to capture and recognize faces at a social scale.

The company became subject to scrutiny in February, when Canada’s Office of the Privacy Commissioner first announced an investigation. Earlier this month, the company stopped offering its software for use here — notably including law enforcement — pending the commissioner’s verdict. Canadians can also ask to have any record of their face stripped from the company’s database, though this requires submitting an image of their face.

What is it about facial recognition specifically that is so worrying? Facebook, TikTok and almost every social app we use already track us and many details of our lives. Further, there are some potentially helpful applications for facial recognition. Imagine, for example, being able to locate a lost child wandering the streets, or to track an armed shooter or terrorist. These aren’t insignificant things.

But those possible benefits aren’t worth the risk. First, consider that facial-recognition software is frequently inaccurate. In the U.S., the National Institute of Standards and Technology found that the tech often misidentified people, particularly people of colour. In Detroit, a Black man was wrongfully arrested after a facial-recognition error.

That inaccuracy isn’t troubling only because it is wrong, however: the mistakes compound imbalances that already exist. We are living through a moment in which people are beginning to recognize that policing and law enforcement have structural and cultural issues that make them more likely to target Black, Indigenous and other marginalized people. If facial recognition exacerbates that kind of prejudice, that seems reason enough to delay its use, perhaps indefinitely.

It’s more than just that, however. As Birgit Schippers of St. Mary’s University College Belfast argues, the broad use of facial-recognition technology has a chilling effect on democracy. Imagine if you couldn’t attend a protest or march, or even have an illicit beer in a park, without wondering whether your face would be captured and your identity passed on to law enforcement.

Facial recognition is thus part of a broader cultural and technological shift that threatens to entrench systemic bias and a culture of surveillance, and to give bad actors new openings to hack the technology and put it to nefarious ends. The combination of facial recognition with the increasing ubiquity of sensors, high-resolution cameras in everyone’s pockets, and ever faster artificial intelligence makes for an insidious, terrifying future in which privacy is forever under threat and the state has more and more power at its disposal.

In short, facial recognition simply shouldn’t be legal. The risks are too great. It is true that the idea of refusing a technology outright seems opposed to something basic within us. For one, technology is not just ambivalent but also unpredictable: how could the Apple engineers who designed the iPhone have predicted that, one day, Instagram would change how restaurants are physically designed? How a technology will eventually be applied is almost impossible to foresee.


Yet the point of accepting technology’s downsides comes from a more basic human desire: the idea that technology, even when ambivalent, ultimately exists to safeguard against the potential brutality of life. I may be opposed to guns generally, but for people who might, say, come face to face with a polar bear, I am glad that guns exist.

But perhaps facial-recognition technology is more akin to nuclear weapons than to guns: so potentially broad and indiscriminate in scope that it is better to simply ban it than to try to keep it under control. Yes, technology is our bulwark against the harshness of life. But there is certain tech that is itself so brutal, it is better to refuse it: to recognize it for the scourge it is and simply say no.

Navneet Alang

Navneet Alang is a Toronto-based freelance contributing technology columnist for the Star. Follow him on Twitter: @navalang


