MICROSOFT is calling on Congress to pass a comprehensive law to crack down on images and audio created with artificial intelligence – known as deepfakes – that aim to interfere in elections or maliciously target individuals.
Noting that the tech sector and nonprofit groups have taken steps to address the problem, Microsoft president Brad Smith on Tuesday (Jul 30) said, “It has become apparent that our laws will also need to evolve to combat deepfake fraud.” He urged lawmakers to pass a “deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”
The company is also pushing for Congress to require that AI-generated content be labelled as synthetic, and for federal and state laws that penalise the creation and distribution of sexually exploitative deepfakes.
The goal, Smith said, is to safeguard elections, thwart scams and protect women and children from online abuses. Congress is currently mulling several proposed bills that would regulate the distribution of deepfakes.
“Civil society plays an important role in ensuring that both government regulation and voluntary industry action uphold fundamental human rights, including freedom of expression and privacy,” Smith said in a statement. “By fostering transparency and accountability, we can build public trust and confidence in AI technologies.”
Manipulated audio and video have already stirred controversy in this year’s US presidential campaign.
In one recent instance, Elon Musk, owner of the social media platform X, shared an altered campaign video that appeared to show the Democratic presidential candidate, Vice-President Kamala Harris, criticising President Joe Biden and her own abilities. Musk did not clarify that the video had been digitally manipulated and later suggested it was intended as satire. BLOOMBERG