Transcript
00:00 - Well, Harris says AI, or artificial intelligence,
00:02 has the potential to improve lives,
00:04 but could pose safety and civil rights concerns.
00:07 The US vice president spoke after a White House meeting
00:10 with senior internet executives
00:11 at the cutting edge of current AI developments.
00:14 Let's get more on this story and bring in Will Duffield,
00:16 who's a policy analyst at the Cato Institute's
00:18 Center for Representative Government.
00:21 Will's focus is on the web of government regulation
00:23 and private rules that govern American speech online.
00:26 Good to see you, sir.
00:28 AI, threat or opportunity, what do you think?
00:31 - Oh, I think definitely more of an opportunity
00:34 than a threat.
00:35 So far, everything we've seen,
00:37 while there are capacities for misuse,
00:39 the sky isn't falling,
00:41 and we have a lot of new useful creative tools
00:43 and methods for organizing data.
00:45 - So AI in itself isn't the bad thing, is it?
00:49 It's those who get to use it.
00:51 That's the problem, I suppose, isn't it?
00:53 - That's the problem with every new technology.
00:57 - Do you have any ideas, any guidelines
00:58 as to what should be done?
00:59 'Cause Kamala Harris just seems to say
01:01 to the internet barons, if you like,
01:04 "Well, it's your responsibility.
01:05 "You've got a legal responsibility to get on with this.
01:07 "Make sure it's okay."
01:08 Surely that's not enough, is it?
01:10 - She focuses on the potential risks and harms.
01:14 And obviously the devil is in the details
01:17 of what exactly those are.
01:19 Now, I've been glad that the White House
01:21 seems to be focusing on more practical misuses:
01:26 AI bias, its misuse for fraud and spam,
01:31 rather than kind of sky-is-falling,
01:34 existential-risk concerns about robots taking over.
01:38 So it seems like a reasonable starting point
01:41 for this conversation.
01:42 - It seems very sensible too, doesn't it?
01:43 Because that's precisely what I think we're looking at,
01:45 the possibility that AI is used to amplify the problems
01:50 that we've already identified
01:51 within social networks today.
01:55 - I think so.
01:56 There will be a little bit of an arms race with it
01:58 as again, we see with other new technologies.
02:01 So it will both be used to produce more spam
02:04 and to filter spam on the other side.
02:07 - You have a perspective on this which few of us have,
02:10 'cause it's something you're looking at in a very deep way.
02:14 Can you give us some kind of sense
02:15 of what the positives could be?
02:17 'Cause I can sit here as a doom merchant
02:20 and talk about what could go wrong.
02:21 Give us something to look forward to
02:23 that might be good.
02:25 - We can all have an AI assistant,
02:30 a tool for interacting with all of the systems
02:35 that have seemed to overtake our lives lately.
02:38 These digital systems, corporate systems,
02:41 government systems that we have to engage with
02:43 in order to live modern lives,
02:45 but are very taxing for individuals to navigate.
02:49 And so I think the best or most exciting uses of AI
02:52 come down to it being an interface
02:56 between us and many of the systems we've created,
02:59 but struggle to use very well.
03:01 - So like you say, you could have your own assistant
03:03 who could maybe be your keychain master,
03:06 your password keeper, that kind of stuff.
03:09 And I'm sensing my doom-merchant side coming into this
03:13 and saying, what if that gets hacked?
03:15 But you've got someone who could actually be there for you
03:17 and could actually make life easier
03:19 as you navigate the internet
03:21 and everything you have to do.
03:23 - I think the question of who owns that personal assistant
03:26 then becomes very important,
03:28 whether it's something you're borrowing
03:30 or renting from a platform
03:31 or something that's running on your own machine at home,
03:35 owned and controlled by you.
03:37 Obviously open source or user modifiable AI
03:42 may be abusable in ways that AI moderated
03:46 by someone else isn't.
03:48 But if you don't control the digital assistant,
03:51 is it really your assistant?
03:54 I think that's the big tension going into this
03:56 or going forward.
03:57 - Indeed, I'm feeling that tension very overtly now.
04:00 Can I ask about the future of news
04:02 before we end this interview?
04:04 Obviously the concept of the AI newsreader
04:08 is something that many people joke about,
04:10 but it's a genuine reality right now, isn't it?
04:13 I mean, there is an AI newsreader.
04:16 - Well, I think right now we're using
04:18 lots of frankly just much worse algorithmic systems
04:23 to sort and identify news for us,
04:25 whether it's the side tab on our phone
04:28 or what the Twitter algorithm chooses to show us.
04:31 And again, more individualized systems
04:35 for selecting relevant news
04:37 would probably be a benefit given where we are now.
04:40 - And that's one of the things though, isn't it,
04:42 if you think about it,
04:42 'cause you mentioned what you see
04:44 on social media.
04:45 Who's choosing that algorithm?
04:46 Who's making that happen?
04:47 Why do I see certain things from a certain political agenda
04:51 on my social media that I don't particularly wanna see?
04:53 - It depends on who's operating
04:57 and who owns that algorithm
04:59 that's making those determinations.
05:00 I mean, usually it's on the basis of
05:02 things you might've looked at in the past,
05:04 but again, if it's not your system,
05:07 there's the potential that someone else is prioritizing
05:10 different things than you might.
05:12 And when we look at AI as reflecting us, humanity,
05:17 it immediately raises the question:
05:19 which humanity, which set of values, which us?
05:22 So I think that, again, that question of values
05:27 will run through all of this,
05:29 because it is such a reflective technology.
05:33 - We could talk all night about this, couldn't we?
05:34 Thank you very much indeed, Will,
05:36 for giving us that insight into how this all works.
05:39 Will Duffield there from the Cato Institute.
05:41 AI, it is the future, well, it's the present,
05:43 it's happening right now,
05:44 but obviously where it goes next is the question.
05:46 Will giving us some guidelines as to how it could go,
05:49 showing us some positives, showing us some negatives,
05:51 and of course, giving us, as ever, a balanced view.
05:53 Thank you, sir, for joining us here on France 24.
05:55 We really appreciate it.
05:56 Thank you.
05:57 - Thanks for having me.
05:58 - You're most welcome.
