The soundtrack of school students marching through Britain’s streets shouting “f*** the algorithm” captured the sense of outrage surrounding the botched awarding of A-level exam grades this year. But the students’ anger towards a disembodied computer algorithm is misplaced. This was a human failure. The algorithm used to “moderate” teacher-assessed grades had no agency and delivered exactly what it was designed to do. It is politicians and education officials who are responsible for the government’s latest fiasco and who should be the target of students’ criticism.
It was understandable that ministers wanted to moderate teacher-assessed grades once they had decided to scrap this year’s A-level exams following the spread of the coronavirus pandemic. The natural tendency of many teachers is to err on the side of optimism. Whereas 25.2 per cent of students achieved A* and A grades in 2019, teachers predicted that 37.7 per cent would do so this year.
Ministers rightly argued that excessive grade inflation for the 2020 cohort would be unfair to students in preceding and subsequent years. Universities, which are often contractually obliged to accept students who meet their offers, would also struggle to accommodate a big increase in numbers. Sadly, as a result of the government’s incompetence and policy reversal, that is exactly the situation we now face.
Sensibly designed, computer algorithms could have been used to moderate teacher assessments in a constructive way. Using past school performance data, they could have highlighted anomalies in the distribution of predicted grades between and within schools. That could have led to a dialogue between Ofqual, the exam regulator, and anomalous schools to come up with more realistic assessments.
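To make that concrete, here is a minimal sketch of the constructive approach described above: compare each school’s teacher-predicted grade distribution with its own recent history and flag large deviations for a follow-up conversation rather than an automatic override. The school names, figures and threshold are hypothetical illustrations, not Ofqual’s actual data or model.

```python
# Hypothetical data: share of A*/A grades per school.
HISTORICAL_TOP_GRADE_SHARE = {   # average of recent years
    "School A": 0.22,
    "School B": 0.31,
}
PREDICTED_TOP_GRADE_SHARE = {    # 2020 teacher predictions
    "School A": 0.41,
    "School B": 0.33,
}

def flag_anomalies(historical, predicted, tolerance=0.10):
    """Return schools whose predicted top-grade share exceeds their own
    history by more than `tolerance`, as candidates for dialogue."""
    flagged = []
    for school, past_share in historical.items():
        gap = predicted.get(school, past_share) - past_share
        if gap > tolerance:
            flagged.append((school, round(gap, 2)))
    return flagged

print(flag_anomalies(HISTORICAL_TOP_GRADE_SHARE, PREDICTED_TOP_GRADE_SHARE))
# [('School A', 0.19)] -> School A's predictions look optimistic; start a dialogue
```

The point of such a check is that it surfaces anomalies for human judgment; it does not rewrite individual grades by formula.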
As it was, Ofqual discarded teacher-assessed grades for all but the smallest cohorts, focused on student rankings within schools and imposed a “fair” distribution of results to prevent excessive grade inflation. That approach may have been collectively justifiable, but it was in many cases individually unjust. The brute-force methodology penalised outliers and so disadvantaged some of the students most deserving of recognition. Even the best mathematics student in the country, attending a school that had performed poorly in the past, might not have been awarded an A* grade.
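A simplified illustration of that rank-and-distribution logic, not a reproduction of Ofqual’s actual model, shows how the outlier gets capped: students are ranked within their school and grades are handed out to match the school’s historical distribution, so an exceptional candidate at a historically weak school cannot receive a grade the school has rarely produced. All names and numbers below are hypothetical.

```python
def impose_distribution(ranked_students, historical_counts):
    """Assign grades to students (best first) according to how many of
    each grade the school historically achieved."""
    grades = []
    for grade, count in historical_counts:   # e.g. [("A*", 0), ("A", 1), ...]
        grades.extend([grade] * count)
    return dict(zip(ranked_students, grades))

# A historically weak school: no A* grades in recent years.
ranked = ["Outstanding mathematician", "Student 2", "Student 3", "Student 4"]
history = [("A*", 0), ("A", 1), ("B", 2), ("C", 1)]

print(impose_distribution(ranked, history))
# {'Outstanding mathematician': 'A', 'Student 2': 'B', 'Student 3': 'B', 'Student 4': 'C'}
# The strongest candidate is capped at an A because the school never produced an A*.
```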
There are broader lessons to be drawn from the government’s algorithm fiasco about the dangers of automated decision-making systems. The inappropriate use of such systems to assess immigration status, policing policies and prison sentencing decisions is a live danger. In the private sector, incomplete and unrepresentative data sets can also significantly disadvantage under-represented groups when it comes to hiring decisions and performance measures.
Given the severe erosion of public trust in the government’s use of technology, it might now be advisable to subject all automated decision-making systems to critical scrutiny by independent experts. The Royal Statistical Society and The Alan Turing Institute certainly have the expertise to give a Kitemark of approval or flag concerns.
As ever, technology in itself is neither good nor bad. But it is certainly not neutral. The more we deploy automated decision-making systems, the smarter we must become in considering how best to use them and in scrutinising their outcomes. We often talk about a deficit of trust in our societies. But we should also be aware of the dangers of over-trusting technology. That may be a good essay subject for next year’s philosophy A-level.