The eSafety Commissioner's office has received its first complaints of children using AI to bully their peers. Internet studies expert Tama Leaver says the AI companies behind some of these platforms should consider age restrictions to combat the issue.
Transcript
00:00 It's deeply distressing seeing your own image being altered in particular ways. It does
00:07 feel like something's being done to you that is out of your control. But it's also the
00:13 fact that it's rarely someone you don't know.
00:16 It's more often that your peers are messing with you in a particular way. And that's why
00:20 bullying matters so much. Rarely do we bully people we don't know. We usually
00:25 go after someone that we do know. And in a social scenario, that's greatly amplified
00:30 when it's not just words, and it's not just something that disappears, but is in fact
00:34 an image that then gets recirculated by other people. So the humor often comes in this
00:39 being distributed by direct messaging, in ways that may not be publicly visible
00:44 but are very visible to the peer group of that particular individual. And that's where
00:50 the real harm can be done. Anything that can generate harmful images should probably be
00:55 age-gated in some particular way. Terms of use suggest that they should only be used by
01:00 adults, but there's nothing preventing young people from using them. Now, there are legitimate
01:05 creative uses that young people are exploring as well. We certainly don't want to lock them
01:09 out of those opportunities. But what we do want to do is make sure that this isn't just
01:14 open to anyone for any reason. And I think that's where a staggered approach,
01:18 where different ages get different sorts of access, is not a bad thing.