
Last Week in AI #45

AI news coverage, Deepfake bots, and more!


Image credit: Regulations.gov

Mini Briefs

Industry, Experts, or Industry Experts? Academic Sourcing in News Coverage of AI

News coverage of AI is “heavily influenced by industry hype and future expectations,” and in this context, sourcing AI researchers directly is expected to give a more nuanced and balanced perspective. This study looks into how AI experts contribute to AI news coverage and reports three main findings:

  1. A very small number of high-profile AI researchers account for more than 70% of AI news mentions.
  2. The researchers sourced most often are those with strong ties to industry, not necessarily those most highly cited by their academic peers.
  3. The overwhelming majority of AI researchers in the news are men.

The authors caution against this seemingly narrow AI reporting:

Given the huge variety of AI research now occurring both in and outside of industry, journalists would be well served to work to develop new and diverse sources for their reporting, including from a wider range of independent academics. Increasing the diversity of sources and story subjects could help provide broader, richer, and potentially more critical insight into the many pressing public problems and opportunities surrounding artificial intelligence.

Deepfake Bot Submissions to Federal Public Comment Websites Cannot Be Distinguished from Human Submissions

In another demonstration of the potential harm deepfake technologies can bring, a researcher trained a language model that can generate fake comments regarding a Medicaid reform waiver. The researcher then submitted 1001 fake comments generated by the model to a federal public comment website, stopping when the bot comments “comprised more than half of all submitted comments.”

This is cause for concern because:

When humans were asked to classify a subset of the deepfake comments as human or bot submissions, the results were no better than random guessing.

Detecting whether a comment is real or fake, at this point, seems very difficult. As such, the solution suggested by the author is not to detect generated text, but rather to prevent bots from submitting comments in the first place, through CAPTCHAs or some other authentication scheme.
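To make the prevention idea concrete, here is a rough illustration only (not the study's implementation): a hypothetical comment-submission handler that gates submissions on server-side CAPTCHA verification via Google reCAPTCHA's siteverify endpoint. The secret key, function names, and framework glue are placeholders for the sake of the sketch.

```python
# Minimal sketch: reject comment submissions that lack a valid CAPTCHA token,
# instead of trying to classify the comment text as human- or bot-written.
import requests

RECAPTCHA_SECRET = "your-secret-key"  # placeholder; issued when registering a site
VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_human(captcha_token: str, client_ip: str | None = None) -> bool:
    """Return True only if the CAPTCHA token verifies server-side."""
    payload = {"secret": RECAPTCHA_SECRET, "response": captcha_token}
    if client_ip:
        payload["remoteip"] = client_ip
    result = requests.post(VERIFY_URL, data=payload, timeout=5).json()
    return bool(result.get("success"))

def handle_comment_submission(comment_text: str, captcha_token: str) -> str:
    # Gate on authentication of the submitter, not on detection of generated text.
    if not is_human(captcha_token):
        return "rejected: failed CAPTCHA"
    # ... store the comment as usual ...
    return "accepted"
```

The design choice mirrors the author's argument: verifying that a submitter is human is far more tractable than deciding, after the fact, whether a given comment was machine-generated.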

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

Explainers

Awesome Videos

How Far is Too Far? | The Age of A.I.


That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!
