AI systems have huge potential to improve businesses, and McKinsey estimates that AI technology will add $13 trillion per year to the global economy by 2030. However, because most businesses are relatively new to AI, they are not well prepared to avoid and mitigate the risks of AI, which can arise from “the development of AI solutions, from their inadvertent or intentional misapplication, or from the mishandling of the data inputs that fuel them.” Poorly deployed AI systems can have grave consequences, from discrimination and regulatory backlash to loss of human life (e.g. through self-driving cars or misdiagnosed diseases).
The article recommends that businesses clearly identify and prioritize organization-specific risks, make company-wide AI policies governing data collection and model usage transparent and easy to understand, and lean toward developing AI models that achieve the right balance of predictive power and explainability. While these suggestions are just a start, organizations need to take the risks of AI seriously in order “to avoid ethical, business, reputational, and regulatory predicaments.”
As we have covered in the past, deepfakes are an emerging, if not yet fully formed, worrying application of AI. Given the potential harm these algorithms might cause, a reasonable question is why research labs and companies are developing and democratizing them in the first place. The answer is that, like most technologies with worrying potential uses, the same algorithms have a myriad of possible positive impacts.
This article highlights several companies already endeavoring to bring about these positive impacts, such as Lyrebird (“a company that creates digital voices that mimic actual speakers, [and] is cloning the voices of people with ALS in order to allow them to continue communicating once they can no longer speak”), Synthesia (“a new startup co-founded by a former Stanford professor, can convincingly dub videos into new languages”), and Dzomo (“which wants to replace expensive stock photography with deepfake images”). It should also be noted that companies such as DeepTrace are already working to counteract the harmful uses of deepfakes; to follow the latest developments on this front, their Tracer newsletter is a good resource.
That’s all for this week! If you liked this and are not yet subscribed, feel free to subscribe below!