Image credit: Ariel Davis / OneZero
Stress and anxiety caused by the ongoing Covid pandemic have spurred a spike of interest in digital mental health tools. AI-powered chatbots, like Woebot and Wysa, offer short-form conversational therapy that the companies claim could help mitigate mental illness. These apps are gaining popularity, with usage doubling year-on-year. Their rise has also been helped by recent federal deregulation of digital mental health and teletherapy tools in response to the pandemic.
Proponents of chatbot therapy argue that the scalability of chatbots can lessen the burden on real clinicians. However, experts are concerned about data privacy and the potential harms of these tools. Although clinical trials evaluating Woebot’s effectiveness are ongoing, there has been no conclusive evidence on the effectiveness of chatbot therapists so far. In any case, chatbots are not about to replace human therapists anytime soon.
Indeed, [experts] agreed that the most promising applications for mental health chatbots and other asynchronous digital tools are in collaboration with real human clinicians.
While AI research has mostly focused on improving the capabilities and performance of AI systems, few question how these AI systems, when deployed in real-world applications, shift and entrench power. Making a “fair” and “unbiased” AI is not enough, since the definitions of these terms are often set by those in power, and just because an AI system is unbiased doesn’t mean its application will be. For example, “in the hands of exploitative companies or oppressive law enforcement, a more accurate facial recognition system is harmful.”
When the field of AI believes it is neutral, it both fails to notice biased data and builds systems that sanctify the status quo and advance the interests of the powerful. What is needed is a field that exposes and critiques systems that concentrate power, while co-creating new systems with impacted communities: AI by and for the people.
Robotic lab assistant is 1,000 times faster at conducting research - Working 22 hours a day, seven days a week, in the dark
Autonomous driving startup turns its AI expertise to space for automated satellite operation - Hungarian autonomous driving startup AImotive is leveraging its technology to address a different industry and growing need: autonomous satellite operation.
The main beneficiaries of artificial intelligence success are IT departments themselves - Artificial intelligence is seen as the cure-all for a plethora of enterprise shortfalls, from chatbots to better understanding customers to automating the flow of supply chains.
“Instagram-like filter” labels molecular details in tumor images - In an image of an individual tumor, a computer recognizes and labels the likely genomic activity of groups of cells based on their appearance.
Breakthrough machine learning approach quickly produces higher-resolution climate data - Researchers have developed a novel machine learning approach to quickly enhance the resolution of wind velocity data by 50 times and solar irradiance data by 25 times - an enhancement that has never been achieved before with climate data.
Deepfakes Are Becoming the Hot New Corporate Training Tool - Coronavirus restrictions make it harder and more expensive to shoot videos. So some companies are turning to synthetic media instead.
Study: Only 18% of data scientists are learning about AI ethics - The neglect of AI ethics extends from universities to industry
An online propaganda campaign used AI-generated headshots to create fake journalists - A network of fictitious authors placed op-eds in conservative outlets
International probe launched into facial recognition firm that scrapes images from the internet - Privacy regulators in the U.K. and Australia have announced a joint probe into Clearview AI’s “data scraping” practices.
Clearview AI stops offering facial recognition technology in Canada - Clearview AI has said it would no longer offer its facial recognition services in Canada, the country’s privacy commissioner announced on Monday, in response to an ongoing investigation into the company by provincial and federal privacy authorities.
In the ‘Blackest city in America,’ a fight to end facial recognition - Activists in Detroit have been waiting a long time for July 24. Since the city’s contract with DataWorks began in 2017, community members have been pushing to stop the software company’s facial recognition services from expanding in their neighborhoods.
Defund Facial Recognition - I’m a second-generation Black activist, and I’m tired of being spied on by the police.
Police Surveilled George Floyd Protests With Help From Twitter-Affiliated Startup Dataminr - Leveraging close ties to Twitter, controversial artificial intelligence startup Dataminr helped law enforcement digitally monitor the protests that swept the country following the killing of George Floyd, tipping off police to social media posts with the latest whereabouts and actions of demonstrators.
Head of Google AI talks about the lack of inclusiveness in AI - Jeff Dean, Senior Fellow and Senior Vice President at Google, took to Twitter to talk about the lack of inclusiveness in the industry.
The Pentagon’s AI director talks killer robots, facial recognition, and China - Joint AI Center (JAIC) acting director Nand Mulchandani said one of its first lethal AI projects is proceeding into a testing phase now. The Joint AI Center was founded in 2018 to act as the Pentagon’s leader in all things AI, and initially focused on non-lethal forms of AI.
Kai-Fu Lee Gives AI a B-Minus Grade in the Covid-19 Fight - Robots and computer programs can help with social distancing and food delivery, but have been less helpful in developing a vaccine.
That’s all for this week! If you are not subscribed and liked this, feel free to subscribe below!