Deepfake UPSC NOTE


 Why are deepfakes dangerous?

  • Misinformation and Propaganda: Deepfakes can be used to create convincing videos and audio recordings that appear to show or say things that never actually happened. This can be used to spread misinformation and propaganda, undermine trust in institutions, and manipulate public opinion. For example, deepfakes could be used to make it appear as if a politician said something they never did, or to create fake news stories that go viral.

  • Erosion of Trust: The rise of deepfakes could lead to a general erosion of trust in the media and in online information. If people cannot be sure whether a video or audio recording is real or fake, they may become more sceptical of all information they encounter online. This could have a negative impact on democracy, journalism, and other institutions that rely on public trust.

  • Harassment and Intimidation: Deepfakes can be used to create malicious content that is designed to harass and intimidate individuals. For example, a deepfake could be used to create a fake video of someone saying something embarrassing or incriminating. This could be used to damage someone's reputation, their career, or even their personal relationships.

  • Financial Crime: Deepfakes can be used to commit financial crimes, such as fraud and identity theft. For example, a deepfake could be used to impersonate someone over the phone or in a video call in order to gain access to their financial information.

  • Undermining Democracy: Deepfakes could be used to interfere in elections and other democratic processes. For example, a deepfake could be used to make it appear as if a candidate has said something they didn't say, or to create fake news stories that could influence voters.

  • Evolving Sophistication: Deepfakes are constantly evolving and becoming more sophisticated, which makes them even more difficult to detect and combat.

Have deepfakes been used in politics?

  • Back in 2020, in the first-ever use of AI-generated deepfakes in political campaigns, a series of videos of Bharatiya Janata Party (BJP) leader Manoj Tiwari were circulated on multiple WhatsApp groups. 

  • The videos showed Mr. Tiwari hurling allegations against his political opponent Arvind Kejriwal in English and Haryanvi, ahead of the Delhi elections.

  • In a similar incident, a doctored video of Madhya Pradesh Congress chief Kamal Nath recently went viral, creating confusion over the future of the State government’s Laadli Behna Scheme.

  • Other countries are also grappling with the dangerous consequences of rapidly evolving AI technology.

  • In May last year, a deepfake of Ukrainian President Volodymyr Zelenskyy asking his countrymen to lay down their weapons went viral after cybercriminals hacked into a Ukrainian television channel.

How did deepfake tech emerge?

  • Deepfakes are made using technologies such as AI and machine learning, blurring the lines between fiction and reality.

  • Although they have benefits in education, film production, criminal forensics, and artistic expression, they can also be used to exploit people, sabotage elections, and spread large-scale misinformation.

  • While editing tools like Photoshop have been in use for decades, the first-ever use of deepfake technology can reportedly be traced back to a Reddit user who, in 2017, used publicly available AI-driven software to create pornographic content by superimposing the faces of celebrities onto the bodies of ordinary people.

  • Deepfakes can easily be generated by semi-skilled and unskilled individuals by morphing audio-visual clips and images.

  • As such technology becomes harder to detect, more resources are now accessible to equip individuals against its misuse.

  • The Massachusetts Institute of Technology (MIT) created a Detect Fakes website to help people identify deepfakes by focusing on small intricate details. 

  • The use of deepfakes to perpetrate online gendered violence has also been a rising concern.

  • A 2019 study conducted by AI firm Deeptrace found that a staggering 96% of deepfakes were pornographic and 99% of them involved women, highlighting how deepfakes are being weaponised against women.

What are the laws against the misuse of deepfakes?

  • India lacks specific laws to address deepfakes and AI-related crimes, but provisions under a range of existing legislation could offer both civil and criminal relief.

  • Section 66E of the Information Technology Act, 2000 (IT Act) is applicable in cases of deepfake crimes that involve the capture, publication, or transmission of a person’s images in mass media, thereby violating their privacy.

  • Sections 67, 67A, and 67B of the IT Act can be used to prosecute individuals for publishing or transmitting deepfakes that are obscene or contain sexually explicit acts. 

  • The IT Rules also prohibit hosting ‘any content that impersonates another person’ and require social media platforms to quickly take down ‘artificially morphed images’ of individuals when alerted.

  • In case they fail to take down such content, they risk losing the ‘safe harbour’ protection — a provision that protects social media companies from regulatory liability for third-party content shared by users on their platforms.

  • Provisions of the Indian Penal Code (IPC) can also be resorted to for cybercrimes associated with deepfakes.

  • These include Sections 509 (words, gestures, or acts intended to insult the modesty of a woman), 499 (criminal defamation), and 153(a) and (b) (spreading hate on communal lines), among others.

  • The Delhi Police Special Cell has reportedly registered an FIR against unknown persons by invoking Sections 465 (forgery) and 469 (forgery to harm the reputation of a party) in the Mandanna case.

Is there a legal vacuum?

  • “The existing laws are not really adequate given the fact that they were never sort of designed keeping in mind these emerging technologies,” says Shehnaz Ahmed, fintech lead at the Vidhi Centre for Legal Policy in Delhi.

  • She cautions that bringing about piecemeal legislative amendments is not the solution.

  • “There is sort of a moral panic today which has emanated from these recent high-profile cases, but we seem to be losing focus from the bigger questions,” she says.

  • Pointing out a lacuna in the IT Rules, she says that they only address instances wherein the illegal content has already been uploaded and the resultant harm has been suffered; instead, there has to be more focus on preventive measures, for instance, making users aware that they are looking at a morphed image.

What has been the Centre’s response?

  • The Union Minister of Electronics and Information Technology Ashwini Vaishnaw on November 23 chaired a meeting with social media platforms, AI companies, and industry bodies where he acknowledged that “a new crisis is emerging due to deepfakes” and that “there is a very big section of society which does not have a parallel verification system” to tackle this issue. 

  • He also announced that the government will introduce draft regulations, which will be open to public consultation, within the next 10 days to address the issue.

  • The Minister of State for Electronics and Information Technology (MeitY) Rajeev Chandrasekhar has maintained that the existing laws are adequate to deal with deepfakes if enforced strictly. 

  • He said that a special officer will be appointed to closely monitor any violations and that an online platform will also be set up to assist aggrieved users and citizens in filing FIRs for deepfake crimes. 

How have other countries fared?

  • In October 2023, U.S. President Joe Biden signed a far-reaching executive order on AI to manage its risks, ranging from national security to privacy.

  • Additionally, the DEEP FAKES Accountability Bill, 2023, recently introduced in Congress, requires creators to label deepfakes on online platforms and to provide notifications of alterations to a video or other content.

  • Failing to label such ‘malicious deepfakes’ would invite criminal sanctions.

  • The European Union (EU) has strengthened its Code of Practice on Disinformation to ensure that social media giants like Google, Meta, and Twitter start flagging deepfake content or potentially face fines.

  • Under the proposed EU AI Act, deepfake providers would be subject to transparency and disclosure requirements.

Way forward

  • AI governance in India cannot be restricted to just a law; reforms have to be centred around establishing standards of safety, increasing awareness, and institution building.

  • “AI also provides benefits, so you have to assimilate it in a way that improves human welfare on every metric while limiting the challenges it imposes,” he says. Ms. Ahmed points out that India’s regulatory response cannot be a replica of laws in other jurisdictions such as China, the US, or the EU.

  • “We also have to keep in mind the Indian context, which is that our economy is still sort of developing. We have a young and thriving start-up ecosystem, and therefore any sort of legislative response cannot be so stringent that it impedes innovation,” she says.

