Mini Briefs

A new bill would force companies to check their algorithms for bias

US lawmakers have proposed the Algorithmic Accountability Act, which, if passed, would direct the Federal Trade Commission to require companies to assess whether their AI algorithms and training data are biased or discriminatory, and whether they pose privacy or security risks to consumers. This is in the same spirit as the EU guidelines on AI we covered last week, but more focused and concrete. To address concerns that the cost of compliance could put smaller companies at a competitive disadvantage, the bill would only apply to “companies that make over $50 million a year, hold information on at least 1 million people, or primarily act as data brokers that buy and sell data.”

‘It’s an educational revolution’: how AI is transforming university life

UK institutions like Staffordshire University and Bolton College have deployed AI chatbots in recent years that help students with questions about things like class schedules and homework deadlines. While current AI chatbots are highly limited in their use cases (they have to be preprogrammed to answer specific types of questions), many educators are optimistic about AI for education, citing the potential for AI to supplement teachers and “reduce their administrative workload so they can focus on more creative or theoretical aspect of their courses.”

Killer Apps

Concerns over a new AI “arms race” among the world’s most powerful countries have been brewing for some time now, but this may distract us from a very real danger of AI development: rushing technology out before it’s safe and truly ready. It may be tempting to demonstrate and deploy new technology quickly to remain on the cutting edge, but past experience with computers has shown that this can lead to security flaws and bugs, which is even more likely with AI technologies given how difficult they are to fully understand and test. So rather than upping the stakes and pace prematurely, we must acknowledge the challenges of developing and deploying AI technology and invest in doing so safely and strategically.

“But the emerging narrative of an “AI arms race” reflects a mistaken view of the risks from AI—and introduces significant new risks as a result. For each country, the real danger is not that it will fall behind its competitors in AI but that the perception of a race will prompt everyone to rush to deploy unsafe AI systems. In their desire to win, countries risk endangering themselves just as much as their opponents.”

Advances & Business

Concerns & Hype

Analysis & Policy

Explainers


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!

We love hearing from you! Feel free to provide feedback, suggest coverage, or express interest in helping. Or, comment below!