Experts describe the use of a person's images, without their consent, to create deepfakes using artificial intelligence (AI) as "dehumanising".

Transcript
00:00 So my journey with image-based abuse has actually gone on for around 10 years. It started
00:06 off with anonymous perpetrators taking images of me from social media and doctoring them
00:12 into pornographic images, and over time this escalated in how severe it got, how graphic
00:21 the images were, how many images there were, until it turned into deepfake videos.
00:28 You've probably heard of deepfakes. To put it simply, a deepfake is a video or an audio
00:31 file that has been altered using artificial intelligence. It can be used to reproduce
00:36 someone's voice or to swap faces in a video. The result is fake but comes across as extremely
00:41 realistic. It doesn't take much to create a deepfake video: all you need is a few pictures.
00:49 According to experts, deepfakes have soared tremendously in the last few years. So why
00:53 is this technology particularly bad news for women? Well, it's been weaponised to create
00:58 pornographic video content. In 2019, DeepTrace published an alarming report. This company,
01:05 which specialises in AI, estimated that porn content makes up 96% of deepfakes. And what's shocking is
01:11 that the vast majority of these videos were created without the consent of the featured
01:15 person. When it all started on internet forums in 2017, you needed serious skills to create
01:22 deepfake videos. The main targets were celebrity women due to the wide range of publicly
01:27 available pictures. As this technology becomes increasingly accessible today, any woman is
01:33 a potential target. This creates a risk of serious psychological and
01:37 social repercussions. Well, it's horrifying and shocking to see yourself depicted in a
01:43 way that you didn't consent to. This is violating and this is dehumanising and the reality is
01:55 that this could impact a person's employability. This could impact a person's interpersonal
02:03 relationships and mental health. One of the most disturbing trends I see on forums of
02:10 people making this content is that they think it's a joke or they don't think it's serious
02:15 because the outputs aren't hyper-realistic, not understanding that for victims, this is
02:20 still really, really painful and traumatic. While some creators make deepfakes as a joke,
02:28 others have malicious intentions. To tackle this, the FBI published a warning in June
02:33 this year regarding deepfakes being used for extortion purposes. Deepfakes have also been
02:39 used to discredit the work of women. For example, investigative journalist Rana Ayyub and activist
02:45 Kate Isaacs were both the victims of smear campaigns which weaponised deepfakes. One
02:51 of the big issues is that it's extremely difficult to control or even to detect deepfakes. We
02:56 want to tell people that they can do something to detect fakes or to fight back against this
03:01 problem. But the problem with that approach is that realistically, the individual and
03:08 the naked eye of the individual is just not going to be a reliable marker for spotting
03:15 fakes, you know, even now, but particularly moving into the future as the outputs get
03:19 better in quality. Around the world, only a few governments have taken up the issue
03:24 at a legislative level. So while the boom in artificial intelligence has been fun for
03:28 a lot of people, it represents serious risks for women online. But for now, there's still
03:34 a long way to go, and lawmakers must catch up with technological innovation. There has
03:39 to be some sort of global response from governments, law enforcement, from people on the ground,
03:49 victim communities, so that there is accountability. Accountability, so that people can't
03:56 just ruin someone's life, get away with it and face no repercussions.
