
Last Week in AI #23

Retraining workers, commercializing robots, and more!

Image credit: James Vincent / The Verge

Mini Briefs

Amazon Says It Will Retrain Workers It’s Automating Out of Jobs. But Does ‘Upskilling’ Even Work?

A piece evaluating Amazon’s ‘Upskilling 2025’ initiative, which involves investing $700 million to retrain one-third of its U.S. workforce (100,000 employees) with the skills they will need as other jobs become automated. According to The Wall Street Journal,

“Amazon said it would retrain 100,000 workers in total by expanding existing training programs and rolling out new ones meant to help its employees move into more-advanced jobs inside the company or find new careers outside of it. The training is voluntary and mostly free for employees and won’t obligate participants to remain at Amazon, the Seattle-based company said.”

The article notes that several past pieces have covered the failings of retraining programs, but it also gives reasons why Amazon’s effort may work better: the company is retraining its own workers for jobs it knows will exist, and can direct its training efforts more intelligently. It’s also not a charitable effort, but rather a calculated investment in its own operations. Still, the lack of participation by the workers themselves in creating this program is problematic. The article concludes:

“Retraining will likely indeed work for some—but probably not many, and probably not those who need the jobs the most. Which is precisely why the only reasonably satisfactory path forward involves giving workers a seat at the table.”

How YouTube is failing children, and what it means for designing AI-moderated experiences

As covered in multiple recent articles, YouTube’s recommendation algorithm has been leading children to view often disturbing content:

“If your child uses YouTube without supervision, they have probably watched an animated video with Peppa Pig weeping as a dentist shoves a needle into her mouth, and then screaming as he extracts her teeth. Or the one where she is attacked by zombies, in the dark. Or the one where Frozen’s Elsa is burned alive. Or the one where a demon makes one of the Paw Patrol commit suicide.”

Google has attempted to address the problem, but its sheer scale has made it impossible to fully resolve. The article’s central thesis is that YouTube’s basic mission of keeping us addicted to consuming more and more content makes these sorts of abuses of the system inevitable. We must decide to reject content addiction, rather than hope for algorithms or moderation to make it safe:

“We won’t solve it with an algorithm, or with an extra 2,700 content reviewers clogging up YouTube’s break-rooms. Any real solution will need to pursue a vision that is based on the aspirations and values we hold as a society, rather than being at the mercy of an “Up Next” list of videos developed using set of values that, as a society, we never granted license to.”

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

  • Moralizing AI: Can We Make Machines That Reason Ethically? - A commonly cited doomsday scenario when talking about runaway artificial intelligence is that it won’t know when to quit.

  • Yes, there will be Robots - It’s just what some people might want to do in five years time when, according to this year’s feel-good report on Artificial Intelligence (AI), a fifth of jobs are predicted to be overtaken by computers.

  • How the Transformers broke NLP leaderboards - This post summarizes some of the recent XLNet-prompted discussions on Twitter and offline. Idea credits go to Yoav Goldberg, Sam Bowman, Jason Weston, Alexis Conneau, Ted Pedersen, fellow members of Text Machine Lab, and many others. Any misinterpretation of those ideas is my own.

That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe