
Last Week in AI #50

AI to mitigate humans' impact on the environment, how interpretable AI can make things worse, and more!


Image credit: Sarina Deb / The Stanford Daily

Mini Briefs

“AI for Good” talk pushes tech usage to mitigate humans’ environmental impact

The third installment of Stanford’s “AI for Good” seminar series discussed leveraging AI and machine learning to mitigate humans’ environmental impact. Stefano Ermon, an assistant professor of computer science at Stanford, highlighted recent progress in AI research and pointed out that we need not only to consider how to use AI to benefit as many people as possible, but also to build representative data and models that provide insight into issues like infrastructure quality, food insecurity, and poverty.

Lucas Joppa, Microsoft’s chief environmental officer, discussed the importance of developing technology that maximizes humans’ positive impact on Earth’s systems. Joppa’s recent memo, “AI for Earth”, made recommendations about how Microsoft should deploy its technology given its investments in AI research, and he created a team within Microsoft focused on that effort. In response to questions about furthering the application of AI to environmental protection, both Ermon and Joppa stressed the need to gather more data about the earth and our impact on it.

Why asking an AI to explain itself can make things worse

Deep learning models have seen many successes in recent years, but how they arrive at their predictions is largely unknown. This is a problem when such models are used to make decisions that affect people’s lives, such as in law enforcement and medical diagnosis. Users should be able to understand how predictions are made and have enough information to disagree with or reject automated decisions.

However, recent research into using visualizations to help people understand a deep learning model and its underlying data revealed some striking problems. While the tools sometimes helped people spot missing values in the data, this usefulness was overshadowed by a tendency to over-trust and misread the visualizations, and in some cases users could not even describe what the visualizations were showing. An online survey of about 200 machine learning professionals found similar confusion and misplaced confidence. Worse, many participants, despite not understanding the math behind the models, were happy to use the visualizations to make decisions about deploying those models.

Explainable AI researchers today agree that if AI systems are to be used by more people, those people need to be part of the design process from the start, and the explanations the AI gives need to be understandable by anyone using it. Until recently, the explainable AI movement was dominated by machine learning researchers. Hopefully, with more perspectives from different fields and a human-centered approach, explainable AI can mitigate overconfidence and misplaced trust in AI.

Advances & Business

Concerns & Hype

  • Facial Recognition Startup Clearview AI Is Struggling To Address Complaints As Its Legal Issues Mount - Clearview AI, the facial recognition company that claims to have amassed a database of more than 3 billion photos scraped from Facebook, YouTube, and millions of other websites, is scrambling to deal with calls for bans from advocacy groups and legal threats.

  • Technology created deepfakes–does it have a way to stop them, too? - As the 2020 election nears, the weaponization of information has become a growing concern among members of both political parties. Machine learning technology has made it easy for anyone to manipulate videos of public figures for malicious use.

  • AI License Plate Readers Are Cheaper–So Drive Carefully - The town of Rotterdam, New York, has only 45 police officers, but technology extends their reach. Last year, Rotterdam embraced a newer generation of automated license plate reader (ALPR) technology, software that can discern plates from more or less any conventional security camera. Rotterdam’s supplier Rekor Systems charges as little as $50 a month to read plates from a single camera.

  • Netflix’s “The Circle” Gets One Key Thing Right About A.I. - Part of the show’s novelty comes in the form of an app called the Circle, a “voice-activated” social media platform displayed on TVs around contestants’ hotel rooms. The Circle is more human-powered than the show lets on, and highlights that much of artificial intelligence today is powered by manual, tedious work done by humans.

  • Artificial Intelligence Will Do What We Ask. That’s a Problem. - The danger of having artificially intelligent machines do our bidding is that we might not be careful enough about what we wish for. The lines of code that animate these machines will inevitably lack nuance, forget to spell out caveats, and end up giving AI systems goals and incentives that don’t align with our true preferences.

  • YouTube’s algorithm seems to be funneling people to alt-right videos - How do we know that? More than 330,000 videos on nearly 350 YouTube channels were analyzed and manually classified according to a system designed by the Anti-Defamation League.

  • AI still doesn’t have the common sense to understand human language - The field of natural language processing (NLP) has made huge strides, and machines can now generate convincing passages at the push of a button. However, recent research on the Winograd Schema Challenge, a benchmark that evaluates the common-sense reasoning of NLP systems, suggests that such benchmarks have made us believe the field is farther along than it actually is.

Analysis & Policy

Expert Opinions & Discussion within the field

  • I Know Some Algorithms Are Biased–because I Created One - Creating an algorithm that discriminates or shows bias isn’t as hard as it might seem. As a first-year graduate student, the author built a machine learning algorithm to analyze a survey sent to US physics instructors; with one particular technique, it found no differences between instructors who did and did not teach programming.

Explainers


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe