Image credit: Rosebud AI
The Covid-19 pandemic is accelerating automation trends in many fields, and fashion modeling is one of them. Digital models have existed for a while, but until recently they were all hand-crafted. Advances in Generative Adversarial Networks (GANs) and other machine learning techniques for generating realistic human avatars have made digital models far more flexible and accessible to modeling agencies and fashion businesses.
While the widespread use of digital models can help reduce the carbon footprint of photoshoots and potentially make modeling more inclusive, it also risks making modeling less authentic and diminishing the recent progress human models have made in “chang[ing] the perception that [they] are just a sample size or a prop for clothes.”
In any case, with the Covid-19 pandemic, digital models have become “necessary and needed,” and “it would seem to be only a matter of time until fashion giants jump on board.”
Companies that sell algorithmic job screening tools commonly tout their products’ ability to correct for human biases. The argument is that, compared to humans, algorithms can be more easily “tested and tweaked” against different measures of bias. A lack of thorough peer-reviewed studies on existing algorithmic hiring tools makes such claims hard to verify. Further, the industry’s strong emphasis on bias mitigation may be hiding other potential issues. For example, predictive hiring companies are offering personality tests that “screen out potential employees who have a higher likelihood of agitating for increased wages or supporting unionization.” Notwithstanding the wage depression such practices may cause, personality tests themselves are highly contentious and lack adequate regulation.
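To make the “tested and tweaked” claim concrete, here is a minimal, purely illustrative sketch of what auditing a screening model against one common bias measure (demographic parity, i.e. equal selection rates across groups) could look like. The function and data below are assumptions for the example, not any vendor’s actual tool or metric.

```python
# Illustrative sketch only: checking a hypothetical resume-screening model's
# outputs for a demographic parity gap (difference in selection rates).
import numpy as np

def demographic_parity_gap(predictions, group_labels):
    """Absolute difference in positive (advance-to-interview) rates between groups A and B."""
    predictions = np.asarray(predictions)
    group_labels = np.asarray(group_labels)
    rate_a = predictions[group_labels == "A"].mean()
    rate_b = predictions[group_labels == "B"].mean()
    return abs(rate_a - rate_b)

# Hypothetical model decisions (1 = recommend interview) and applicant groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_gap(preds, groups)
print(f"Selection-rate gap between groups: {gap:.2f}")
# A vendor could "tweak" thresholds until this gap shrinks, but passing one
# metric says nothing about other harms, such as the personality screens above.
```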
The goal of making hiring work better for everyone is a noble one, and it could be achieved if regulators mandate greater transparency. As it stands, none of these tools has received rigorous, peer-reviewed evaluation.
Check out our weekly podcast covering these stories! Website | RSS | iTunes | Spotify | YouTube
Wyze will try pay-what-you-want model for its AI-powered person detection - Smart home company Wyze is experimenting with a rather unconventional method for providing customers with artificial intelligence-powered person detection for its smart security cameras: a pay-what-you-want business model.
Popular Microsoft Chatbot Xiaoice Gains Independence as a New Company Led by Di Li and Harry Shum - This week, Microsoft announced it would spin off its chatbot business XiaoIce, with all associated technologies licensed to a newly formed independent company. Microsoft says it will maintain an investment interest in the company. Microsoft launched XiaoIce in 2014.
Jibo, the social robot that was supposed to die, is getting a second life - NTT Disruption is keeping Jibo alive
CMU and Facebook AI Research use machine learning to teach robots to navigate by recognizing objects - Carnegie Mellon today showed off new research into the world of robotic navigation. With help from the team at Facebook AI Research (FAIR), the university has designed a semantic navigation system that helps robots find their way by recognizing familiar objects.
Facebook is simulating users’ bad behavior using AI - Facebook’s engineers have developed a new method to help them identify and prevent harmful behavior like users spreading spam, scamming others, or buying and selling weapons and drugs.
These young immigrant brothers are teaching A.I. to high-schoolers for free: We want to give kids ‘a lucky break’ - Twenty-something brothers Haroon and Hamza Choudery came to the US from a remote village in Pakistan two decades ago. They have started an educational start-up, A.I. for Anyone, to give something back.
Predictive policing algorithms are racist. They need to be dismantled. - Lack of transparency and biased training data mean these tools are not fit for purpose. If we can’t fix them, we should ditch them.
The Microsoft Police State: Mass Surveillance, Facial Recognition, and the Azure Cloud - Microsoft, which has largely escaped criticism, is knee-deep in services for law enforcement, fostering an ecosystem of companies that provide police with software using Microsoft’s cloud and other platforms.
A Nixon Deepfake, a “Moon Disaster” Speech and an Information Ecosystem at Risk - A new video re-creates a history that never happened, showing the power of AI-generated media
The Record Industry Is Going After Parody Songs Written By an Algorithm - Georgia Tech researcher Mark Riedl didn’t expect that his machine learning model “Weird A.I. Yankovic,” which generates new rhyming lyrics for existing songs, would cause any trouble. But it did.
GPT-3 Is Amazing - And Overhyped - It is important for the technology community to have a more clear-eyed view of what GPT-3 can and cannot do.
Did a Person Write This Headline, or a Machine? - GPT-3, a new text-generating program from OpenAI, shows how far the field has come - and how far it has to go.
That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!