Facial recognition technology (FRT) may need to be regulated in much the same way as some ethically sensitive medical techniques to ensure there are sufficient safeguards in place to protect people's privacy and freedoms.
That's according to Professor Fraser Sampson, the UK Government's Surveillance Camera Commissioner (SCC), who works with the Home Office overseeing tech-related surveillance in the UK.
He was responding to last week's report by the Geneva-based Human Rights Council (HRC) which argued that the protection of human rights should be at the heart of the development of AI-based systems including areas such as law enforcement.
The report went on to say that unless sufficient safeguards are in place to protect human rights, there should be a moratorium on the sale of AI systems and those that fail to meet international human rights laws should be banned.
Now, the SCC has added his voice to the debate as lawmakers around the world attempt to create a workable legal framework in the face of growing calls for human rights protections.
"This is a fast-evolving area and the evidence is elusive but it may be that the aspects currently left to self-determination present the greatest risk to communities or simply to give rise to the greatest concern among citizens," he told The Register.
"It may even be the case that some technological biometric and surveillance capabilities such as FRT are so ethically fraught that they can only be acceptably carried out under licence in the future - perhaps akin to the regulatory arrangements for human fertilisation and embryology.
"That is a matter of policy for others," he said.
Professor Sampson added: "But we need as a minimum a single set of clear principles by which those using the biometric and surveillance camera systems will be held to account, transparently and auditably."
Asked to comment further on the HRC's report he told us: "Where biometric surveillance systems are being bought with public money and deployed in the public interest then there is surely a legitimate expectation that all parties will adopt an ethical and human rights compliant approach."
"I agree that, if used without sufficient regard to how they affect people's human rights, the emerging technological capabilities in the area of surveillance and biometrics can be negative and potentially catastrophic," he added.
The use of AI and technologies such as FRT has recently been the subject of governmental scrutiny both in the UK and the US.
In 2019, London's Metropolitan Police deployed a system that was not only extremely inaccurate, but also led officers to arrest people based on dodgy matches anyway.
In May of that year, Met cops fined a man who covered his face while the force was conducting a test of the technology in Romford, London.
In August 2020, the Court of Appeal found that the use of facial recognition technology by South Wales Police had been unlawful.
In April, the EU published its own proposals for harmonised rules on artificial intelligence (Artificial Intelligence Act) where it too recognised the benefits while acknowledging the "new risks or negative consequences for individuals or the society." ®