Last Week in AI #36

Interactive fairness games, manipulating cubes, restoring ancient texts and more!

Image credit: Selman Design / MIT Technology Review

Mini Briefs

Can you make AI fairer than a judge? Play our courtroom algorithm game

Algorithmic fairness came under the spotlight in 2016, when ProPublica published a report showing that COMPAS, an algorithm used to assess recidivism risk for defendants, falsely flagged black defendants as high risk at nearly twice the rate of white defendants. This article examines algorithmic fairness and the competing definitions of fairness by walking the reader through an interactive game that challenges you to improve COMPAS.

Predictions reflect the data used to make them—whether by algorithm or not. If black defendants are arrested at a higher rate than white defendants in the real world, they will have a higher rate of predicted arrest as well. This means they will also have higher risk scores on average, and a larger percentage of them will be labeled high-risk—both correctly and incorrectly. This is true no matter what algorithm is used, as long as it’s designed so that each risk score means the same thing regardless of race.
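
To make this concrete, here is a toy simulation (not the actual COMPAS model, and not numbers from the article): a perfectly calibrated score is applied to two groups whose underlying arrest rates differ, and the higher-base-rate group ends up with more high-risk labels, both correct and incorrect. The base rates, noise level, and threshold are all illustrative assumptions.

```python
import random

random.seed(0)

def simulate(base_rate, n=100_000, threshold=0.5):
    """Each person's 'true' risk is drawn around the group base rate;
    the score IS that risk (perfect calibration), and arrest actually
    happens with that probability."""
    high_risk = false_pos = 0
    for _ in range(n):
        risk = min(max(random.gauss(base_rate, 0.2), 0.0), 1.0)
        arrested = random.random() < risk
        if risk >= threshold:
            high_risk += 1
            if not arrested:
                false_pos += 1   # labeled high-risk, never arrested
    return high_risk / n, false_pos / n

# Same scoring rule for both groups, different underlying arrest rates.
for name, rate in [("group A (base rate 0.3)", 0.3),
                   ("group B (base rate 0.5)", 0.5)]:
    hr, fp = simulate(rate)
    print(f"{name}: high-risk {hr:.1%}, falsely high-risk {fp:.1%}")
```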

The article uses the interactive game to show you that there are multiple possible (and defensible) definitions of fairness, and that satisfying certain definitions at the same time is mathematically impossible. Arriving at the right definition of fairness for a given problem is an open debate with no purely technical resolution. Even human judges are forced to make trade-offs between different definitions of fairness. However, human judges and algorithmic decision-making systems differ in terms of accountability. Currently, algorithmic decision-making systems like COMPAS are protected as trade secrets, so they cannot be publicly scrutinized and thus held accountable.
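
One way to see why the definitions conflict is a relation from Chouldechova's 2017 paper on fair prediction (supplementary to this article, which does not cite it explicitly). For any classifier, the prevalence p, false positive rate, false negative rate, and positive predictive value are tied together:

```latex
\[
  \mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\cdot\bigl(1-\mathrm{FNR}\bigr)
\]
```

If two groups have different prevalences p but the classifier keeps PPV equal across them (each risk score “means the same thing regardless of race”), then their false positive and false negative rates cannot both be equal as well; one fairness criterion has to give.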

So what should regulators do? The proposed Algorithmic Accountability Act of 2019 is a good start, says Andrew Selbst, a law professor at the University of California who specializes in AI and the law. The bill, which seeks to regulate bias in automated decision-making systems, has two notable features that could serve as a template for future legislation.

The game highlights the difficulties of making decisions about a population and the conflicts and tensions within “fairness”. There is now regulatory acknowledgement of the problem and there are encouraging signs that policy will follow. However, there are tough questions still to be asked, and caution to be exercised when dealing with these systems.

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

Explainers

  • Training real AI with fake data - AI systems have an endless appetite for data. Gathering and labeling data is expensive and time-consuming, and in some cases impossible. So companies are teaching AI systems with fake photos and videos, sometimes themselves generated by AI, that stand in for the real thing.

  • Neural nets are just people all the way down - A trip down the rabbit hole asking how much of machine learning is really powered by machines, and how much by people working behind the scenes.

  • Uncertainty Quantification in Deep Learning - While it is usually not possible to guarantee that deep learning models are absolutely perfect, it can be useful to know how certain they are about their predictions. This requires models to be aware of their own prediction accuracy for a given input; a minimal sketch of one common approach follows this list.


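As promised above, here is a minimal sketch of Monte Carlo dropout, one common uncertainty-quantification technique (the linked explainer may cover others). The tiny random network, the 0.5 dropout rate, and the number of stochastic passes are all arbitrary placeholder assumptions; the point is that the spread of predictions across passes serves as an uncertainty signal.

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny, untrained two-layer "network" with random weights,
# standing in for a real trained model.
W1, W2 = rng.normal(size=(4, 16)), rng.normal(size=(16, 1))

def forward(x, drop=0.5):
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > drop     # dropout stays ON at inference
    h = h * mask / (1.0 - drop)           # inverted-dropout scaling
    return (h @ W2).item()

x = rng.normal(size=(1, 4))
samples = [forward(x) for _ in range(200)]  # repeated stochastic passes
print(f"mean prediction: {np.mean(samples):+.3f}")
print(f"uncertainty (std over passes): {np.std(samples):.3f}")
```
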
That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe