
Last Week in AI #87

Inclusive AI, less-than-one-shot learning, and more!

Image credit: Devin Coldewey / TechCrunch

Mini Briefs

Microsoft and partners aim to shrink the ‘data desert’ limiting accessible AI

While AI applied to vision and speech has the potential to help people in numerous ways, it is often less useful for people with disabilities, because the systems are rarely trained on data from those users. To rectify this, Microsoft has teamed up with a number of nonprofit partners on projects aimed at developing AI systems that are helpful and accessible to people with disabilities such as blindness and ALS. Microsoft and its partners plan to train systems with accessibility as the primary goal, using data from people with a range of disabilities, ensuring the algorithms see the kind of data they will be applied to from the beginning. Developing inclusive AI is complicated, since today's deployed AI systems carry a built-in sense of what is “normal”, from how people walk to how they use their devices. The timeframe may be long, but building systems that work for those who defy current AI's notion of normal is a worthwhile investment.

A radical new technique lets AI learn with practically no data

AI systems typically require many images to recognize an object reliably, while humans can do so from only a few examples, even as babies. In the effort to bring AI closer to human-level ability, researchers believe the same kind of learning should be possible for machine learning algorithms. Researchers at the University of Waterloo successfully trained an algorithm to recognize digits using only 10 images rather than the full 60,000 in the MNIST dataset. But there's a catch: the 10 images had to be carefully engineered to contain the same amount of information as the original 60,000, and that engineering was only possible because the researchers tested their method with the k-nearest-neighbors algorithm, which is visual and easily interpretable. The method shows promise, since those 10 images could in theory be distilled even further, down to just 2 or 3; however, extending it to more complicated models such as neural networks is hard, because their lack of interpretability makes the data-engineering step difficult. Despite these difficulties, continued research in this direction holds promise for making machine learning more accessible to those without large amounts of compute at their disposal.
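To make the core idea concrete, here is a minimal toy sketch (not the Waterloo researchers' actual method or data): a nearest-neighbor classifier that labels many query points using just one hand-picked "prototype" per class, standing in for the carefully engineered images described above. The 2-D points, prototypes, and function names are illustrative assumptions.

```python
import numpy as np

# One hand-crafted prototype per class, playing the role of the
# tiny engineered training set.
prototypes = np.array([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]])
labels = np.array([0, 1, 2])

def predict(points, protos, proto_labels):
    """Classify each point by its single nearest prototype (1-NN)."""
    # Pairwise distances: (n_points, n_prototypes) via broadcasting.
    d = np.linalg.norm(points[:, None, :] - protos[None, :, :], axis=-1)
    return proto_labels[d.argmin(axis=1)]

queries = np.array([[0.2, -0.1], [4.8, 5.3], [0.3, 4.7]])
print(predict(queries, prototypes, labels))
```

Because kNN's decision regions are determined entirely by the distances to the few training points, one can reason backwards about where to place prototypes to carve out the desired regions; that transparency is what made the researchers' data-engineering step tractable, and what neural networks lack.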

Podcast

Check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube

News

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field


That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!
