
Last Week in AI #40

Biased credit assignment, the future of work, and more!


Image credit: Anthony Weeks / Stanford HAI

Mini Briefs

Apple’s ‘sexist’ credit card investigated by US regulator

Tech entrepreneur David Heinemeier Hansson found that the Apple Card gave him 20 times the credit limit it gave his wife. It turns out he wasn’t the only one: Steve Wozniak, co-founder of Apple, said he had the same issue.

While Hansson’s wife’s credit limit was increased after he raised the issue with Apple, the company offered no explanation beyond “It’s the algorithm.” This case highlights how biased data and a lack of diversity lead to biased algorithmic decisions. Goldman Sachs provided the statement below when asked for comment by Bloomberg:

“Our credit decisions are based on a customer’s creditworthiness and not on factors like gender, race, age, sexual orientation or any other basis prohibited by law.”

Research shows that even when protected attributes are not used as inputs, algorithms can find proxies for them, which can produce discriminatory effects. From the looks of it, it seems highly likely that bias has crept into this system.

When the algorithms involved were developed, they were trained on a data set in which women indeed posed a greater financial risk than men. This could cause the software to spit out lower credit limits for women in general, even if the assumption it is based on is not true of the population at large.
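To make the proxy effect concrete, here is a toy sketch (not Goldman Sachs’ actual model; the feature names, numbers, and the scikit-learn logistic regression are all invented for illustration) of how a model that never sees gender can still reproduce a gender gap when it is trained on biased historical decisions and a gender-correlated feature:

```python
# Toy illustration of proxy discrimination: the protected attribute (gender)
# is never given to the model, but a correlated feature leaks it anyway.
# All feature names and numbers are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

gender = rng.integers(0, 2, n)             # 0 = man, 1 = woman (never a model input)
income = rng.normal(60, 15, n)             # same income distribution for both groups
proxy = rng.normal(gender * 2.0, 1.0, n)   # e.g. a spending-pattern feature correlated with gender

# Historical decisions were biased: identical income, lower approval odds for women.
logit = 0.05 * income - 1.5 * gender - 2.0
approved = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train on income and the proxy only -- gender is excluded, as Goldman's statement describes.
X = np.column_stack([income, proxy])
model = LogisticRegression(max_iter=1000).fit(X, approved)

scores = model.predict_proba(X)[:, 1]
print("mean approval score, men:  ", scores[gender == 0].mean().round(3))
print("mean approval score, women:", scores[gender == 1].mean().round(3))
# The gap persists: the model recovers gender from the proxy feature.
```

Dropping the protected attribute from the inputs does not remove the pattern, because the proxy feature carries the same signal.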

The only good thing about this situation is that it has been uncovered. Given the companies involved (Apple, Goldman Sachs), this incident will hopefully make regulators take a more active role in tackling the issue of algorithmic bias.

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

Explainers

  • Self-Supervised Representation Learning - Self-supervised learning opens up a huge opportunity for better utilizing unlabelled data while still learning in a supervised manner. This post covers many interesting ideas for self-supervised learning tasks on images, videos, and control problems.

  • Teaching a neural network to use a calculator - This article explores a seq2seq architecture for solving simple probability problems in Saxton et al.’s Mathematics Dataset. A transformer is used to map questions to intermediate steps, while an external symbolic calculator evaluates intermediate expressions.

  • Andrey Markov & Claude Shannon Counted Letters to Build the First Language-Generation Models - As part of IEEE Spectrum’s series on the history of natural language processing, this piece looks at how the first language-generation models were built by counting letter transitions (a toy sketch of the idea follows this list).

  • Federated Learning: Challenges, Methods, and Future Directions - What is federated learning? How does it differ from traditional large-scale machine learning, distributed optimization, and privacy-preserving data analysis? What do we understand currently about federated learning, and what problems are left to explore? The post briefly answers these questions and describes ongoing work in federated learning at CMU (a toy federated-averaging sketch also follows this list).

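As a rough sketch of the counting idea behind the Markov/Shannon piece above (not the article’s code; the tiny corpus and function names here are made up), a character-level model can be built by tallying which character follows which and then sampling from those tallies:

```python
# A minimal, Shannon-style character-level language model: count which character
# tends to follow each character in a corpus, then sample from those counts.
# The corpus string is just a stand-in; any text works.
import random
from collections import defaultdict, Counter

corpus = "the quick brown fox jumps over the lazy dog. the dog sleeps."

# Count transitions: for each character, how often each next character follows it.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def generate(start="t", length=60, seed=0):
    """Sample text by repeatedly drawing the next character in proportion to its count."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        counts = transitions[out[-1]]
        if not counts:                      # dead end: no observed successor
            break
        chars, weights = zip(*counts.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(generate())
```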

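And as a bare-bones illustration of the federated learning post’s central idea (a simplification, not CMU’s method or code; the linear-regression setup and client sizes are invented), clients can fit models on their own data and send only parameters to a server that averages them, weighted by data size:

```python
# A bare-bones sketch of federated averaging: clients fit a model on their own
# data and only share parameters; the server averages them. Raw data never moves.
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

def make_client_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data(n) for n in (50, 200, 80)]   # non-uniform client sizes

def local_fit(X, y):
    # Each client solves its own least-squares problem locally.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

local_weights = [local_fit(X, y) for X, y in clients]
sizes = np.array([len(y) for _, y in clients])

# Server aggregates: average of client models, weighted by client data size.
global_w = np.average(local_weights, axis=0, weights=sizes)
print("federated estimate:", global_w.round(3), "vs true:", true_w)
```

In practice federated averaging runs many such rounds with partial client participation, but the data-stays-local, average-the-models pattern is the same.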
That’s all for this week! If you liked this and aren’t subscribed yet, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe