Here we go again with more AI crime prediction for policing

The government of President Javier Milei in Argentina last week announced the creation of an artificial intelligence group within the country's Cybercrime and Cyber Affairs Directorate that will use statistical software to predict crime.

"The mission of the Artificial Intelligence Unit Applied To Security (UIAAS) is the prevention, detection, investigation and prosecution of crime and its connections through the use of artificial intelligence," the Ministry of Security resolution states - as translated by AI.

The UIAAS is expected to monitor the public internet and dark web to investigate crimes and identify perpetrators, to conduct image analysis, to analyze security camera video in real time, and to "use machine learning algorithms to analyze historical crime data to predict future crimes and help prevent them," among other responsibilities.
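The resolution doesn't say how that last item would work under the hood. As a rough, hypothetical sketch of what "machine learning on historical crime data" usually amounts to in hotspot-forecasting tools - a model trained on past incident counts per map grid cell to score where incidents are likely next - here's a minimal example in Python. The synthetic data, column names, and model choice are illustrative assumptions, not details of the UIAAS's system.

# Minimal, hypothetical sketch of grid-based crime forecasting - not the UIAAS's actual system.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for "historical crime data": incident counts per grid cell per week.
rng = np.random.default_rng(0)
cells, weeks = 50, 104
df = pd.DataFrame({
    "cell_id": np.repeat(np.arange(cells), weeks),
    "week": np.tile(np.arange(weeks), cells),
    "incidents": rng.poisson(1.0, cells * weeks),
})

# Features: incident counts in the previous three weeks for the same cell.
df = df.sort_values(["cell_id", "week"])
for lag in (1, 2, 3):
    df[f"lag_{lag}"] = df.groupby("cell_id")["incidents"].shift(lag)
df = df.dropna()

# Target: whether the cell sees at least one incident in the current week.
X = df[["lag_1", "lag_2", "lag_3"]].to_numpy()
y = (df["incidents"] > 0).astype(int).to_numpy()

# Train on the first year of data, score the second - the basic shape of most hotspot models.
train = (df["week"] < 52).to_numpy()
model = LogisticRegression().fit(X[train], y[train])
probs = model.predict_proba(X[~train])[:, 1]
print("Mean predicted probability of at least one incident:", round(float(probs.mean()), 3))

In real deployments the features are richer (time of day, calls for service, and so on) and the scores feed patrol-allocation decisions, which is exactly where the civil liberties concerns come in.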

As noted by El País, Argentina's government, in the midst of cutting public agencies, is creating a new public agency to predict crimes, a notion made famous in Minority Report, sci-fi author Philip K. Dick's cautionary tale about persecuting people for things they haven't done.

However, the Argentine Ministry of Security cites a different source of inspiration: "Countries such as the United States of America, China, the United Kingdom, Israel, France, Singapore, India, among others, are pioneers in the use of artificial intelligence in their areas of government and security forces."

Andrew Ferguson, professor of law at American University in Washington DC, and author of "The Rise of Big Data Policing," told The Register that while he's unfamiliar with specific AI initiatives in Argentina, "It is hard to find any social media predictive policing technologies that actually work at scale."

Forecasts made by predictive policing software have been pretty poor, according to a report from The Markup last year, which found that crime predictions generated in 2018 by Geolitica (since acquired by SoundThinking) for the Plainfield Police Department (PD) had a success rate of less than half a percent.
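"Success rate" here means the share of predictions for which a matching crime was actually reported in the predicted place and time window. A toy calculation - with invented numbers, not The Markup's data - looks like this:

# Toy hit-rate calculation for hotspot predictions; the numbers are invented, not The Markup's.
predictions = [            # (cell_id, week) pairs the software flagged
    ("A1", 1), ("A1", 2), ("B4", 1), ("C7", 3), ("D2", 2),
]
reported_crimes = {        # (cell_id, week) pairs where a matching crime was actually reported
    ("A1", 2), ("E9", 1),
}

hits = sum(1 for p in predictions if p in reported_crimes)
print(f"Hit rate: {hits / len(predictions):.1%}")   # 1 of 5 -> 20.0%

By that measure, the Plainfield predictions landed below half a percent.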

New York City has, incidentally, just started testing an AI-based weapon detector system in its subways.

In a social media post, the Center for Studies on Freedom of Expression and Access to Information at the University of Palermo in Buenos Aires expressed concern about the opacity of the Argentine government's technology acquisition and implementation, noting that in the past similar technology has been used to profile academics, journalists, politicians, and activists.

Not all predictive software is necessarily inaccurate or rights-trampling. University of Chicago researchers in 2022 claimed they had developed a 90 percent accurate algorithm to predict where crimes will occur a week in advance. The study also revealed that crime in wealthier areas results in more arrests than crime in disadvantaged areas - an institutional flaw more than a technical one.

Academics from Argentina and Europe, however, consider Prometea, a predictive AI system developed by the Public Prosecutor's Office of the City of Buenos Aires, to be a success story. In a 2020 report, authors Juan G. Corvalán and Enzo Maria Le Fevre Cervini claim Prometea "predicts the solution to a judicial case in less than 20 seconds with a 96 percent success rate" and that "it allows the drafting of 1000 judicial sentences related to housing rights in only 45 days; in the traditional way this would take 174 days."

On a related note, a 2021 study [PDF] of AI readiness in five Latin American countries says that IBM, Accenture, and Oracle, asked to develop Prometea, estimated the project would cost $100,000. Corvalán, deputy prosecutor of the Buenos Aires public prosecutor's office at the time, reportedly said that officials instead looked inward and paid an internal developer 33,000 pesos (~$500 given a 2020 exchange rate of 0.015) to make the first version of the software.

A 2021 survey of predictive policing and crime control in the US argues that while AI systems can suffer from inaccuracy, high cost, bias, and other limitations, "there is consensus by the majority of the researchers on the importance of predictive algorithms on the policing landscape." ®

Bootnote

A university study just concluded that mentioning "artificial intelligence" in product descriptions makes shoppers distrustful - and may even decrease purchases.
