Last Week in AI News #17

OpenAI's big DOTA win, US and EU AI regulation proposals, and more!


We are relaunching our digest articles with a new name and format! From now on, we plan to release weekly digests with several “mini-briefs” and links to top AI news.

Mini Briefs

OpenAI’s Dota 2 AI steamrolls world champion e-sports team with back-to-back victories

On Friday, before OpenAI’s latest and last Dota 2 match, we put out an editorial highlighting the caveats that should be kept in mind with regard to it and DeepMind’s similar efforts on StarCraft. As we predicted, OpenAI’s bots did better than in their last showing and won the day, though it’s not entirely clear why; sadly, this event did not come with a release of more technical details about the techniques involved. As covered in The Register, there is at least a promise of releasing more details at a later date:

OpenAI have often been criticised for brute-forcing problems, solving them by simply slamming down more computational power. Before OpenAI Five played at The International last year, it had already guzzled 128,000 CPU cores and 256 Nvidia GPUs. When The Register asked OpenAI how much hardware the bots consumed this time round, a spokesperson told us it was planning to release the statistics at a later date and would prefer to describe the hardware in vague and disappointing units of “GPU hours”, rather than a concrete number.

Nitpicks aside, this is an impressive accomplishment that once again demonstrates OpenAI’s ability to push present-day AI techniques to their limits. Just keep in mind there are multiple caveats to this achievement, and its implications for harder classes of AI problems are unclear.

How I Became a Robot in London - From 5,000 Miles Away

Telerobotics, where a human remotely operates a robot, has been around for a long time in areas like surgery and bomb disposal, but these interfaces can be clunky and hard to use. Recent advances in haptic gloves for VR and touch sensing for robot hands can greatly improve the realism of robot teleoperation. In this article, the author commands a robot hand across the Atlantic Ocean with a haptic glove to grasp, move, and “feel” objects. While the technology is still in its early stages, the ability for humans to effectively control robots could be very useful for collecting data to train future AI systems.

AI systems should be accountable, explainable, and unbiased, says EU

The EU recently published a set of seven guidelines that AI systems should meet. These guidelines are generally quite abstract, and they are not legally binding, although legislation along similar lines could follow in the future. The ethics guidelines tackle issues such as having explainable, robust, and transparent AI systems, as well as data privacy. Amid increasing competitive pressure in AI research and deployment from the U.S. and China, the EU is choosing to shape the future of AI development through AI ethics. While this is an important discussion to have, it is unclear whether the EU’s proposed guidelines and regulations will have significant influence if the EU does not lead AI development itself.

The infamous AI gaydar study was repeated – and, no, code can’t tell if you’re straight or not just from your face

In 2017, Stanford researchers published a paper claiming that a neural network could be trained on images of human faces to classify a person’s sexual orientation. This study was controversial because profiling people’s sexual orientation, especially with an imperfect model, is ripe for misuse. The study also did not adequately address the fact that it used pictures from dating websites, whose subjects likely signaled their sexual orientation through cues such as makeup or headwear, so the neural network may not have actually learned anything about faces but rather these intentional secondary cues.

A recent study, reported in this article, tried to replicate the original Stanford study with limited success, and it notes that the trained neural network can still identify sexual orientation with over 60% accuracy even when the subjects’ faces are completely blurred out. This implies that the network did not learn anything about human faces but rather picked up on conditions like lighting or photo contrast that weakly correlate with the dating profiles of certain groups of people. This new study, however, has problems of its own, given that it also scraped dating websites without consent.

In short, we do not have AI that can tell sexual orientation from faces: it is unlikely that there exist facial features that actually correlate with sexual orientation, recent research in this area has not been reproducible, and the studies to date suffer from errors in both methodology and ethics.

Advances & Business

Concerns & Hype

Analysis & Policy

Expert Opinions & Discussion within the field

Explainers

Awesome Videos

Favorite Tweet


That’s all for this digest! If you liked this and are not subscribed, feel free to subscribe below!
