The use of facial recognition technology (FRT) by British police has violated several ethical and human rights obligations, a report released by the University of Cambridge said on Thursday.
The report, published by the Minderoo Centre for Technology and Democracy, audited the use of FRT by the Metropolitan Police and South Wales Police. It outlines three main issues: privacy, discrimination and accountability. The right to privacy is protected under the Human Rights Act 1998 and can only be interfered with if doing so is “in accordance with the law and necessary in a democratic society”. However, the report concluded that the use of FRT was “very broad” and therefore may not meet the requirements set out in the Human Rights Act.
The second issue concerns discrimination, a common concern in large-scale FRT deployments because of potential bias in the underlying AI. Under the Equality Act 2010, public authorities “must, in the exercise of their functions, have due regard to the need to eliminate discrimination”. However, the report concluded that:
Deployments did not transparently assess bias in the technology or discrimination in its use. For example, the Metropolitan Police did not release an assessment of racial or gender bias in the technology prior to its trial of live facial recognition. Nor did it release demographic data on the resulting arrests, making it difficult to assess whether the technology perpetuates racial profiling.
The final issue concerns accountability and oversight. The report found that “[t]here is also no clear redress for people harmed by the use of facial recognition.” In addition, “the ethics body that oversees the South Wales Police [FRT] trials had no independent experts on human rights or data protection, according to available minutes.”
Overall, while modern societies will certainly make use of FRT, as the report concludes, “we must ask what values we want to embed in the technology.”