In the end, even Silicon Valley has had to admit that there is something a bit sinister about facial recognition technology.
Momentum has been building in recent months. Arvind Krishna of IBM was first. He announced in June that his company would no longer offer automated facial recognition (AFR) software. He also said it would not “condone uses of technology for mass surveillance, racial profiling, or violations of basic human rights and freedoms”.
This was dismissed — perhaps predictably — as halo-polishing on Krishna’s part. IBM was hardly at the cutting edge of the field, and clearly public relations played some part in the company’s coming out against it. But then Amazon suspended police use of its Rekognition technology, software which, infamously, falsely matched 28 members of Congress, a disproportionate number of them people of colour, with criminal mugshots. Microsoft has also expressed concern that AFR could be misused.
So perhaps we should not be too surprised that Ed Bridges, who took South Wales Police to court last year (and lost) after his face was scanned in the street, last week won his case on appeal.
Bridges had noted that, though South Wales Police said its use of AFR was “lawful and proportionate” when it hoovered up his biometric data, there had been no public consultation, no parliamentary debate, and not even a polite warning before a powerful and invasive new surveillance tool was cheerfully introduced into society.
The Court of Appeal found that his right to privacy had indeed been breached, and that the police force had failed to examine whether or not the software had a race or gender bias — which of course it does.
But the ruling does not make the use of AFR in England and Wales illegal, as some have claimed. What it does is require that AFR be used only in accordance with a clear and detailed legal framework.
That is progress of a kind — but the problems innate to AFR remain. As it is, the technology essentially automates racism and sexism, as Rekognition’s misidentification of Congress members showed. And, as the MIT Media Lab researcher Joy Buolamwini has suggested, accidental misidentification is not the only problem: the technology could deliberately be used in biased ways.
In any event, the proliferation of AFR, which remains likely, would still reduce individuals to strings of biometric data and make it easy for any government or private company so inclined to sort whole groups of people into neat categories.
If only we had listened to Luke Stark. He is the Microsoft researcher so alarmed by the potential of AFR to change society for the worse that he has said it ought to be treated like nuclear waste. AFR, Stark pointed out in April last year, has “insurmountable flaws” at the technical level which “reinforce discredited categorisations around race and gender”. For this and other reasons, the technology is “intrinsically socially toxic, regardless of the intentions of its makers.”
He is not alone in thinking this. Timnit Gebru, technical co-lead of Google’s Ethical Artificial Intelligence Team, told a summit in Geneva last May that there are “huge error rates” in identification by skin type and gender. Study after study confirms this.
For the moment at least, we can chalk up the Court of Appeal’s ruling as a win and a world first, one which reflects the growing understanding of how sophisticated technology can undermine both personal freedom and social equality by reducing the individual to “data” and intensifying existing discrimination. But an honest, open conversation about the role of this kind of technology in our society has yet to be had, and innovation has an inconvenient way of outpacing the law even at the best of times. An immediate ban on the use of AFR in public is the only option.