The news that schools in North Ayrshire in Scotland have introduced facial recognition technology to support payments in their canteens prompts many questions. The company behind the scheme, CRB Cunninghams, says valuable time will be saved by speeding up the process whereby children queue up and pay. North Ayrshire council says that 97% of children or their parents have given their consent. Alternative ways to buy food will be provided for the rest.
But this deal between a local authority and a technology company, whose stated aim is to remove cash from UK schools, is not the no-brainer that its backers present it as. Facial recognition technology is still relatively new. Its various forms and applications are unfamiliar to most people and their use remains controversial. The Scottish meal payment system is said to differ from “live” facial recognition software, where computers scan through crowds to match faces. Encrypted templates of the children’s faces will be stored on the schools’ servers. But privacy campaigners and others are rightly concerned about the decision to make facial scanning a part of children’s daily routines.
The use of biometric fingerprints has been widespread in UK school canteens for years. Time pressures aside, it is easy to see why headteachers and other managers were keen to move away from cash, which has come to seem messy and labour-intensive in our electronic age. But just because pupils and their families have become accustomed to the use of some biometric data, it does not follow that these systems should be extended. Nor does the idea that facial recognition is a more Covid-secure technology than fingerprinting, as has been suggested with regard to North Ayrshire, provide sufficient justification for the decisions that have been taken. On the contrary, the deal with CRB Cunninghams should be seen as a significant step towards the normalisation of the use of facial recognition technology by public authorities.
What people think about this, and similar developments, depends on the importance that they place on privacy and personal data, and the extent to which they trust technology companies to handle them. There is no question that companies are eager to test out new capabilities and work out how to make money from them. In several countries, the use of such technology has been found to be unlawful. Last year, the court of appeal ruled that the use of facial recognition technology by police in Wales breached privacy and equality laws. In the US and Sweden, schools have been stopped from using it to monitor attendance or security.
Typically, the buyers and sellers of these systems present them as useful tools and nothing more. But as Prof Kate Crawford, the author of a recent book about AI, and other critics have pointed out, the companies at this point are running ahead of democratic debate and decision-making. The challenges of how to regulate and secure consent for the kinds of information gathering that digital technology makes possible are a long way from being answered. And while this is the case it is ethically dubious, to put it mildly, to use children as guinea pigs.