Japanese bloke collared after using AI software to uncensor smut and flogging it

In brief A man was detained in Japan for selling uncensored pornographic content that he had, in a way, depixelated using machine-learning tools.

Masayuki Nakamoto, 43, was said to have made about 11 million yen ($96,000) from peddling over 10,000 processed porn clips, and was formally accused of selling ten hardcore photos for 2,300 yen ($20). He pleaded guilty to violating Japan's copyright and obscenity laws, NHK reported this month.

Explicit images of genitalia are forbidden in Japan, and as such its porn is partially pixelated. Don't pretend you don't know what we're talking about. Nakamoto flouted these rules by downloading smutty photos and videos, and reportedly used deepfake technology to generate fake private parts in place of the pixelation.

"This is the first case in Japan where police have caught an AI user," Daisuke Sueyoshi, a lawyer who's tried cybercrime cases, told Vice. "At the moment, there's no law criminalizing the use of AI to make such images."

Surprise, surprise machines can't make ethical decisions

Just in case you weren't completely convinced that today's machine learning is incapable of ethical and moral judgment, researchers at the Allen Institute for AI built a system demonstrating just that.

Ask Delphi is a language model that takes users' ethical questions and replies with verdicts like "it's bad," "it's acceptable," or "it's good." Here's a terrible example. Given the input "Should I commit genocide if it makes everybody happy," the machine's output was: "You should."

Obviously, Ask Delphi isn't bad or good per se, it just doesn't know what it's talking about. It doesn't understand what genocide is. Words mean nothing to the software; they're just numerical concepts stored as vectors. What is interesting is that the experiment shows how easy it is to manipulate the outputs of these models by tweaking the inputs.

Something obviously bad like genocide can be associated with good just by adding a positive phrase such as "if it makes everybody happy." Thankfully, Ask Delphi is just a bizarre research project; no one is actually using it to make decisions.
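Ask Delphi's internals aren't public, so as a purely hypothetical illustration of why appending positive wording can flip a model's verdict, here's a naive bag-of-words scorer (this is not how Delphi works; the words and weights are invented for the sketch):

```python
# Toy scorer: judges a statement by summing per-word sentiment weights.
# Entirely hypothetical -- it only illustrates how surface wording,
# rather than meaning, can tip a naive model's output.
WEIGHTS = {
    "genocide": -10.0,
    "everybody": +5.0,
    "happy": +6.0,
}

def judge(statement: str) -> str:
    """Return a crude verdict based on summed word weights."""
    score = sum(WEIGHTS.get(w.strip("?,."), 0.0)
                for w in statement.lower().split())
    return "it's good" if score > 0 else "it's bad"

print(judge("Should I commit genocide"))
# -> it's bad
print(judge("Should I commit genocide if it makes everybody happy"))
# -> it's good
```

The bolted-on positive phrase outweighs the negative word, flipping the verdict without the model understanding either.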

You can read the research paper here.

Popular physics engine made open source thanks to DeepMind

MuJoCo, a popular physics engine used to simulate realistic mechanical movements for robots and virtual games, will be free for anyone to download and use.

Users previously had to pay to use the software developed by Emo Todorov under his company Roboti LLC. But as of this week, it will be free for anyone to download, and soon open source, after DeepMind acquired the rights to it.

"The rich-yet-efficient contact model of the MuJoCo physics simulator has made it a leading choice by robotics researchers and today, we're proud to announce that, as part of DeepMind's mission of advancing science, we've acquired MuJoCo and are making it freely available for everyone, to support research everywhere," the AI research lab said in a statement.

People can use the model to train their AI robots in simulation under various conditions before they're tested in the real world, or craft virtual environments to train reinforcement learning agents. DeepMind is working to tweak the code for "full open sourcing"; what that means is, the code will eventually appear on GitHub under an Apache license, we're told, and binaries can be fetched right now for free from the MuJoCo website.
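To give a sense of what an engine like MuJoCo does under the hood, here's a minimal sketch of the kind of time-stepping loop a physics simulator runs each frame: semi-implicit Euler integration of a body falling under gravity. MuJoCo's real API is vastly richer (contacts, joints, actuators); this toy code is not its interface, just the core idea:

```python
# Semi-implicit Euler: update velocity from acceleration first,
# then update position from the new velocity.
GRAVITY = -9.81  # m/s^2

def step(pos: float, vel: float, dt: float = 0.002) -> tuple[float, float]:
    """Advance the simulation by one timestep of length dt seconds."""
    vel += GRAVITY * dt
    pos += vel * dt
    return pos, vel

# Drop a body from 1 m and simulate one second at 2 ms per step.
pos, vel = 1.0, 0.0
for _ in range(500):
    pos, vel = step(pos, vel)
```

Running thousands of such steps per simulated second, with contact forces added in, is what lets researchers train robot controllers safely before touching real hardware.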

Facebook's AI content moderation algorithms are naff

Facebook whistleblower Frances Haugen revealed this week that the internet giant's automated systems take down only between three and five per cent of toxic language, and less than one per cent of all posts that violate its content policies.

Clips containing shooting incidents, gruesome car crashes, or cruel cockfights slipped through its detection system, it's said. Sometimes benign videos were misclassified as being violent or inappropriate. A carwash was labelled as a first-person shooter video, according to the Wall Street Journal. Facebook uses automated content moderation to flag up problematic content for human review.

Guy Rosen, FB's veep of integrity, argued in response that "focusing just on content removals is the wrong way to look at how we fight hate speech." Sometimes moderators will decide to limit the spread of a particular post by not recommending certain groups, pages, or accounts to other users, for instance. ®
