
Last Week in AI #74

Deep learning's compute limits, invisible AI in healthcare, and more!


Image credit: Mike Reddy / STAT

Mini Briefs

MIT researchers warn that deep learning is approaching computational limits

Deep learning's advances have relied heavily on advances in compute: without powerful GPUs and related hardware developments, computationally expensive methods such as Neural Architecture Search would not be feasible today. Researchers at the Massachusetts Institute of Technology, Underwood International College, and the University of Brasilia conducted a study whose results, they claim, show that we are approaching the computational limits of deep learning. The researchers analyzed 1,058 papers from arXiv, tracking the computation used in a single pass of each deep learning model studied and the capability of the hardware used to train each of those models. Based on the trends they found, the researchers anticipate that without greater efficiency in its use of computational power, training state-of-the-art models will exact prohibitive hardware, environmental, and monetary costs.
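To make the study's extrapolation concrete, here is a minimal sketch of the kind of trend analysis it describes: fitting a power law between training compute and model error, then asking how much compute a lower error rate would imply. The data points are hypothetical stand-ins, not the paper's measured values, and this is an illustration of the approach rather than the authors' code.

```python
# Illustrative sketch (not the study's actual analysis): fit a log-linear
# trend between training compute and model error, then extrapolate the
# compute a lower error rate would demand under that trend.
import numpy as np

# Hypothetical (training compute in FLOPs, error rate) pairs for successive
# state-of-the-art models -- stand-ins for the paper's measured values.
flops = np.array([1e17, 1e18, 1e19, 1e20, 1e21])
error = np.array([0.20, 0.15, 0.11, 0.085, 0.065])

# Fit log10(error) = a * log10(flops) + b, i.e. a power-law relationship.
a, b = np.polyfit(np.log10(flops), np.log10(error), deg=1)

def compute_needed(target_error: float) -> float:
    """Invert the fitted trend to estimate the compute a target error implies."""
    return 10 ** ((np.log10(target_error) - b) / a)

# Even halving the best error rate in this toy data implies orders of
# magnitude more compute -- the kind of scaling the study flags as
# unsustainable in hardware, environmental, and monetary terms.
print(f"fitted slope: {a:.2f}")
print(f"estimated compute for 3% error: {compute_needed(0.03):.2e} FLOPs")
```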

An invisible hand: Patients aren’t being told about the AI systems advising their care

The coronavirus outbreak has brought news of advances in medical AI to the fore, showing how researchers are attempting to apply state-of-the-art methods to applications such as epidemic forecasting and drug discovery. But AI is also being used for more routine, everyday tasks: since February 2019, AI has played a role in making discharge decisions for tens of thousands of patients hospitalized in one of Minnesota's largest health systems, yet those patients were never told of the AI's involvement. This case represents a greater role for AI in everyday healthcare decisions, and while some use cases may be productive, it is well known that AI systems can be fraught with bias.

If patients are not informed of AI's role in their care, its potentially biased decisions can harm them without their knowledge. Doctors and nurses who withhold information about the use of AI worry that disclosure would derail conversations with their patients and undermine trust, but these very worries suggest that those healthcare workers should take a robust, open approach: evaluating the AI's usefulness, integrating it into their practice, and informing patients affected by its decisions. However, since disclosure of AI-powered support tools falls into a regulatory gray zone, hospitals have little incentive to be completely transparent. Harvard Law School's Glenn Cohen believes that doctors and nurses should be having frank conversations about the issue of disclosure. As he worries, patients who discover after the fact that AI helped decide their care are likely to lose trust in the technology.

Podcast

Check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube

News

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

  • Announcing nominees for the second annual Women in AI Awards - The women nominated below have all made outstanding contributions in the AI field, from advancing the work in ethics and fairness in AI, to trailblazing research critical to AI innovation, to ensuring young women entering the field have the opportunity and mentorship necessary to thrive.

That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!

Get more AI coverage in your email inbox: Subscribe