UK wants criminal migrants to scan their faces up to five times a day using a watch

In brief The UK's Home Office and Ministry of Justice want migrants with criminal convictions to scan their faces up to five times a day using a smartwatch kitted out with facial-recognition software.

Plans for wrist-worn face-scanning devices were discussed in a data protection impact assessment report from the Home Office. Officials called for "daily monitoring of individuals subject to immigration control," according to The Guardian this week, and suggested such migrants should wear either fitted ankle tags or smartwatches at all times.

In May, the British government awarded a contract worth £6 million to Buddi Limited, makers of a wristband used to monitor older folks at risk of falling. Buddi appears to be tasked with developing a device capable of taking images of migrants and sending them to law enforcement to be scanned.

Location data will also be beamed back. Up to five images will be sent every day, allowing officials to track known criminals' whereabouts. Only foreign-national offenders who have been convicted of a criminal offense will be targeted, it is claimed, and the data will be shared with the Ministry of Justice and the Home Office.

"The Home Office is still not clear how long individuals will remain on monitoring," commented Monish Bhatia, a lecturer in criminology at Birkbeck, University of London.

"They have not provided any evidence to show why electronic monitoring is necessary or demonstrated that tags make individuals comply with immigration rules better. What we need is humane, non-degrading, community-based solutions."

Talk to Meta's AI chatbot

Meta has rolled out the latest version of its machine-learning-powered language-model virtual assistant, BlenderBot 3, and put it on the internet for anyone to chat with.

Traditionally this kind of thing hasn't ended well, as Microsoft's Tay bot showed in 2016 when web trolls found they could make the software pick up and repeat offensive phrases, including Nazi sentiments.

People just like to screw around with bots to make them say things that will generate controversy - or the software simply goes off the rails all by itself even when used as intended. Meta's prepared for this and is using the experiment to try out ways to block offensive material.

"Developing continual learning techniques also poses extra challenges, as not all people who use chatbots are well-intentioned, and some may employ toxic or otherwise harmful language that we do not want BlenderBot 3 to mimic," it said. "Our new research attempts to address these issues.

Meta will collect information about your browser and your device through cookies if you try out the model, and you can decide whether you want the conversations logged by the Facebook parent. Be warned, however, that Meta may publish what you type into the software in a public dataset.

"We collect technical information about your browser or device, including through the use of cookies, but we use that information only to provide the tool and for analytics purposes to see how individuals interact on our website," it said in a FAQ.

"If we publicly release a data set of contributed conversations, the publicly released dataset will not associate contributed conversations with the contributor's name, login credentials, browser or device data, or any other personally identifiable information. Please be sure you are okay with how we'll use the conversation as specified below before you consent to contributing to research."

Reversing facial recognition bans

More US cities have passed bills allowing police to use facial-recognition software, reversing earlier ordinances that limited the technology.

CNN reported that local authorities in New Orleans, Louisiana, and in the state of Virginia are among those that have changed their minds about banning facial recognition. The software is risky in the hands of law enforcement, where the consequences of a mistaken identification are harmful. The technology can misidentify people of color, for instance.

Those concerns, however, don't seem to have put officials off from using such systems. Some have even voted to approve the technology's use by local police departments despite previously opposing it.

Adam Schwartz, a senior staff attorney at the Electronic Frontier Foundation, told CNN "the pendulum has swung a bit more in the law-and-order direction."

Scott Surovell, a state senator in Virginia, said law enforcement should be transparent about how they use facial recognition, and that there should be limits in place to mitigate harm. Police may run the software to find new leads in cases, for example, he said, but should not be able to use the data to arrest someone without conducting investigations first.

"I think it's important for the public to have faith in how law enforcement is doing their job, that these technologies be regulated and there be a level of transparency about their use so people can assess for themselves whether it's accurate and or being abused," he said. ®
