
Last Week in AI #43

China's rule against deepfakes, disinformation heading into the 2020 election, and more!


Image credit: Alexandra S. Levine, Nancy Scola, Steven Overly, and Christiano Lima / Politico

Mini Briefs

China makes it a criminal offense to publish deepfakes or fake news without disclosure

China has released a new government policy designed to prevent the spread of fake news and misleading videos created using artificial intelligence, otherwise known as deepfakes. Failure to disclose that deepfakes or false information posted online were created by AI is now considered a criminal offense. The rules, to be enforced by the Cyberspace Administration of China, will go into effect on January 1, 2020.

This rule follows California's action last month, when it became the first US state to criminalize the use of deepfakes in political campaign promotion and advertising. With Congress and US platforms still analyzing the potential harm of deepfakes and building tools to detect them, China's rule represents a broader move forward in the fight against technology-aided misinformation.

Why the fight against disinformation, sham accounts and trolls won’t be any easier in 2020

The big tech companies have announced aggressive steps to keep trolls, bots and online fakery from marring another presidential election - from Facebook’s removal of billions of fake accounts to Twitter’s spurning of all political ads. Unfortunately, as we approach the 2020 election, disinformation techniques are only growing more subversive and sophisticated, leaving tech companies with difficult and subjective choices on how to combat them.

Politico considers a number of the evolving challenges Silicon Valley faces as it tries to counter such misinformation heading into the election cycle. These include the danger of American trolls, the trickiness of policing domestic content, the fact that bad actors are learning and improving, and the difficulty of labeling information as misleading with certainty.

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

  • The Doer and The Clarion Caller - An unnecessary drama is unfolding on Twitter over "the war" between connectionists and symbolists. This drama is absurd and is perpetuated by people whose relevance exists only in discussion of the "difference" and the "war."

Explainers

  • Increase model performance by… removing data? - In any given dataset, not every sample will contribute equally to training a machine learning model. This is obvious when you say it out loud. Some data will teach a model a ton, but some will be irrelevant or redundant and won't really move the needle at all (a rough sketch of this idea follows this list).

  • This is how Facebook’s AI looks for bad stuff - The context: The vast majority of Facebook’s moderation is now done automatically by the company’s machine-learning systems, reducing the amount of harrowing content its moderators have to review.

  • arXiv Machine Learning Classification Guide - We are excited to see the adoption of arXiv in the rapidly growing field of machine learning. Given the interdisciplinary nature of machine learning, it is becoming a challenge for our volunteer moderators to keep up with verifying the appropriate categories for machine learning applications.
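
To make the data-pruning idea from the first explainer concrete, here is a minimal sketch of one way to drop low-value samples: remove exact duplicates and filter out examples a preliminary model already classifies with very high confidence. This is an illustration of the general concept, not the method from the linked article; the dataset, model, and 0.99 confidence cutoff are all assumptions chosen for the example.

```python
# Sketch: prune redundant / uninformative samples before (re)training.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in dataset (assumption: any tabular classification data works here).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# 1. Remove exact duplicate rows - purely redundant data.
_, unique_idx = np.unique(X, axis=0, return_index=True)
unique_idx = np.sort(unique_idx)
X, y = X[unique_idx], y[unique_idx]

# 2. Fit a preliminary model and measure its confidence on each sample's true label.
model = LogisticRegression(max_iter=1000).fit(X, y)
true_class_prob = model.predict_proba(X)[np.arange(len(y)), y]

# 3. Keep only samples the model is still uncertain about (hypothetical 0.99 cutoff).
keep = true_class_prob < 0.99
X_pruned, y_pruned = X[keep], y[keep]
print(f"Kept {keep.sum()} of {len(keep)} samples after pruning")
```

In practice the filtering criterion (loss, confidence, gradient norm, nearness to duplicates) and the threshold are the interesting design choices; the point of the sketch is only that a sizable share of a dataset can often be removed without hurting the model.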


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!
