Deepfake technology: How and why China plans to regulate it

The story so far:

China’s Cyberspace Administration, the country’s cyberspace regulator, is rolling out new rules, which will take effect on Jan. 10, to limit the use of deep synthesis techniques and curb disinformation. Deep synthesis is defined as the use of techniques, including deep learning and augmented reality, to generate text, images, audio and video and to create virtual scenes. One of the most notorious applications of the technology is deepfakes, in which synthetic media is used to swap one person’s face or voice for another’s. As the technology advances, deepfakes are becoming harder to detect. They have been used to create pornographic videos of celebrities, spread fake news, commit financial fraud, and more. Under China’s newly imposed guidelines, companies and platforms using the technology must seek consent from individuals before editing their voices or images.

What are deepfakes?

Deepfakes are synthetic images and audio created with machine-learning algorithms, often to spread misinformation by replacing the appearance, voice, or both of real people with convincing artificial likenesses. The technology can create people who do not exist, and it can falsify real people saying and doing things they never said or did.

The term deepfake originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” used Google’s open-source deep-learning technology to create and distribute pornographic videos. The videos were doctored with a technique known as face swapping, in which the user replaced real faces with celebrity faces. Cybersecurity firm Norton said in a blog post that deepfake technology is now being used for malicious purposes such as scams and hoaxes, celebrity pornography, election manipulation, social engineering, automated disinformation attacks, identity theft and financial fraud.
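The face-swapping trick described above is commonly built from one shared encoder and two person-specific decoders: the encoder compresses any face into a latent code capturing expression and pose, and decoding that code with the other person’s decoder produces the swap. Below is a minimal conceptual sketch of that architecture using toy NumPy matrices in place of trained neural networks; the dimensions, weights and function names are illustrative assumptions, not a real deepfake implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a flattened 8x8 "face" and a small latent code.
FACE_DIM, LATENT_DIM = 64, 16

# One shared encoder compresses any face into a latent code.
W_enc = rng.normal(size=(LATENT_DIM, FACE_DIM)) * 0.1

# Two person-specific decoders render faces with A's or B's identity.
W_dec_a = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1
W_dec_b = rng.normal(size=(FACE_DIM, LATENT_DIM)) * 0.1

def encode(face):
    """Map a face into the shared latent space (expression, pose, lighting)."""
    return W_enc @ face

def decode(latent, W_dec):
    """Render a latent code back into a face carrying one person's identity."""
    return W_dec @ latent

# A face of person A is encoded once, then decoded with person B's
# decoder: B's identity now "wears" A's expression -- the face swap.
face_a = rng.normal(size=FACE_DIM)
swapped = decode(encode(face_a), W_dec_b)
print(swapped.shape)  # (64,)
```

In a real system the two matrices per path are deep convolutional networks trained on thousands of images of each person, which is what makes the swapped output photorealistic.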

Deepfake technology has been used to impersonate high-profile figures such as former US Presidents Barack Obama and Donald Trump, Indian Prime Minister Narendra Modi, Facebook CEO Mark Zuckerberg and Hollywood personality Tom Cruise.

What is China’s new policy to curb deepfakes?

The policy requires deep synthesis service providers and users to ensure that any doctored content produced with the technology is clearly marked and traceable to its source, according to a South China Morning Post report. The regulation also requires people who use the technology to edit someone’s image or voice to notify and obtain the consent of those involved. When reprinting news produced with this technology, the source can only be a government-approved list of news outlets. Under the new regulations, deep synthesis service providers must also abide by local laws, respect ethics, and maintain the “correct political direction and correct public opinion orientation.”
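The “clearly marked and traceable to its source” requirement implies that providers attach some machine-readable provenance record to synthetic content. The sketch below shows one hypothetical way a provider might do that with a disclosure flag and a content fingerprint; the field names and the `label_synthetic` helper are illustrative assumptions, not anything specified in the regulation itself.

```python
import hashlib
import json

def label_synthetic(content: bytes, provider: str, technique: str) -> dict:
    """Build a provenance record for AI-generated content.

    A hypothetical scheme illustrating the 'clearly marked and
    traceable' requirement -- not an official format.
    """
    return {
        "synthetic": True,          # explicit disclosure that content is generated
        "provider": provider,       # the deep synthesis service provider
        "technique": technique,     # e.g. "face_swap" or "voice_clone"
        # Hash of the content itself, so the record can be matched
        # back to the exact file it describes.
        "sha256": hashlib.sha256(content).hexdigest(),
    }

record = label_synthetic(b"<video bytes>", "example-app", "face_swap")
print(json.dumps(record, indent=2))
```

A record like this could be embedded in file metadata or stored by the platform, letting regulators trace a clip back to the service that produced it.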

Why implement such a policy?

China’s cyberspace regulator said it was concerned that the unfettered development and use of deep synthesis could lead to criminal activities such as online fraud or defamation, the South China Morning Post reported. The country’s latest move aims to curb the risks arising from platforms that use deep learning or virtual reality to alter online content. If successful, China’s new policy could set an example and create a policy framework that other countries can emulate.

What are other countries doing to combat deep fakes?

The European Union has updated its Code of Practice on Disinformation to stop the spread of disinformation through deepfakes. The revised code requires tech companies including Google, Meta and Twitter to take measures to counter deepfakes and fake accounts on their platforms. Once they sign up to the code, they have six months to implement their measures, and under the updated rules they could face fines of up to 6% of their annual global turnover if found non-compliant. Launched in 2018, the code brought together global industry players for the first time to work together against disinformation.

The code was signed in October 2018 by the online platforms Facebook, Google, Twitter and Mozilla, as well as advertisers and other players in the advertising industry. Microsoft joined in May 2019, while TikTok signed on in June 2020. However, an assessment of the code revealed important gaps, so the Commission issued guidance to update and strengthen it. The revision process was completed in June 2022.

In July last year, the United States introduced the bipartisan Deepfake Task Force Act to assist the Department of Homeland Security (DHS) in countering deepfake technology. The measure directs the DHS to conduct an annual study of deepfakes, assessing the technology used, tracking its use by domestic and foreign entities, and proposing available countermeasures to tackle the problem.

Some US states, such as California and Texas, have passed laws making it a criminal offense to post and distribute deepfake videos designed to influence election results. Virginia law imposes criminal penalties for the distribution of non-consensual deepfake pornography.

In India, by contrast, there are no legal provisions that specifically prohibit the use of deepfakes. However, existing laws covering copyright infringement, defamation and cybercrime can be invoked against misuse of the technology.

What is Canada’s position on deepfakes?

While Canada does not have any regulations against deepfakes, it is in a unique position to lead initiatives to combat them. Some of the most cutting-edge AI research is conducted in Canada, by the government and by many domestic and foreign players. Additionally, Canada is a member and leader of many relevant multilateral initiatives, such as the Paris Call for Trust and Security in Cyberspace, the NATO Cooperative Cyber Defence Centre of Excellence, and the Global Partnership on Artificial Intelligence. It could use these forums to coordinate with global and domestic players on deepfake policies in different areas.
