Transcript
00:00So, it is then about making AI accessible while also making it safe.
00:15It's about making it free of heavy-handed red tape while also giving it a set of rules.
00:20Above all, though, it's about embracing the technology in a positive way.
00:24World leaders and technology executives convening in Paris today for that huge international
00:29artificial intelligence summit, both France and co-host India hoping to be leaders in
00:34the field.
00:35French President Emmanuel Macron today announced €109 billion of private sector investment.
00:41That comes, of course, as the U.S. and China also lay claim to the AI crown.
00:44Well, we're going to cross straight over to the summit.
00:47We're going to talk to France 24's technology editor, Peter O'Brien, who joins us from there.
00:52Peter, we're starting off with a pretty big announcement then from Emmanuel Macron.
00:59Yeah, this is a big bang.
01:02He compared it directly to the $500 billion Stargate project announced by Donald Trump
01:08a couple of weeks ago.
01:10And €109 billion, if it does materialize, would be a significant investment for France.
01:16Compare it to VivaTech, the tech salon here in Paris last year, where Macron announced
01:21a €400 million investment.
01:25So we're talking more than 250 times that amount.
01:28The money we know about so far is coming in large part from the UAE.
01:33The Emirates have already pledged €30 to €50 billion for a data center that they
01:40say will be the largest in Europe for the purposes of AI.
01:45Add to that Brookfield, that's a Canadian investment manager.
01:50They are putting another €20 billion into a data center in the north of France.
01:56As for the rest of the money, the details are still to be figured out.
02:01Macron pointed to some of the usual suspects, Iliad, Orange and Thales, massive French
02:09companies that could be making investments.
02:11The data centers in France can take advantage of the abundant decarbonized power
02:18here to keep them running.
02:20That's the advantage for these investors.
02:22Meanwhile, Peter, there's an awful lot more going on at the summit behind you as well,
02:26isn't there?
02:29There is an awful lot.
02:31One of the things yet to be confirmed is JD Vance, the American vice president.
02:37Is he going to make a speech today?
02:39If he does, it might raise the hackles of some people here from the European Commission,
02:48and some French politicians, because as we know, he's criticized the EU in the past
02:53for trying to over-regulate and for clamping down on what he calls free speech.
03:00Elsewhere, Emmanuel Macron is meeting at the Élysée with the Chinese vice premier.
03:09This is interesting because in the last six months or so, the Chinese have pivoted quite
03:14a lot towards AI safety and controlling the risks of AI, while this summit and Macron
03:21in particular have been kind of accused of taking risk a little bit out of the equation
03:26in favor of investment instead.
03:28But if you listened to Macron's speech last night, you would have seen it was very much one
03:32of his classic en même temps speeches, where he both talked up the prospects for investment,
03:37selling France and French companies, and also talked up the risks of AI to democracy.
03:43Peter, thanks very much.
03:45Peter O'Brien, our technology editor, joining us there from the summit in the center of Paris.
03:51Well, joining us now is one of those taking part in one of the panels at the conference today,
03:55the panel looking at how to ensure the technology is used for the public good in a resilient
03:59and open AI ecosystem.
04:02Abeba Birhane is founder of the AI Accountability Lab at Trinity College Dublin.
04:07She joins us now.
04:08Thanks very much for being with us on the program.
04:10I mean, on the surface, then, what France is trying to do sounds great, doesn't it?
04:14Making AI that's ethical, that's accessible as well.
04:17Yeah, absolutely.
04:20And spending a huge amount of time, you know, looking at public AI, artificial intelligence
04:29in the public interest, and some of that goes to ensuring that AI systems that are deployed
04:36into the world, that are integrated into critical social infrastructure, are actually
04:41tested and vetted and can do the job that they claim to do.
04:48This is one of the core and important aspects of spending and investing in artificial intelligence
04:57that is in the public interest.
04:59Is it really possible, though, to make it ethical and accountable?
05:03How on earth do you police it?
05:06Well, I tend to believe that we should focus on, you know, ensuring that those that are
05:13developing and deploying and the vendors of AI systems, we should make those bodies as
05:20accountable as possible, rather than trying to think about making AI systems accountable
05:27themselves.
05:28Because at the end of the day, AI is human through and through, from the data that's
05:34required to train the AI systems, from the developers and the scientists, from the companies
05:39that are, you know, developing and selling these products. Accountability lies squarely
05:46in ensuring that those that are responsible for and behind AI systems are held accountable
05:53for the models they choose to develop and they choose to deploy.
05:57So the focus has to be ensuring, you know, relevant bodies are accountable rather than
06:03making the artefact itself accountable.
06:06And do you feel that everybody's on the same track with that?
06:08I mean, there are signs, aren't there, that perhaps some countries, notably, I'm thinking,
06:12of course, of the US under Donald Trump, also China, are rather less worried about all of that?
06:18Yeah, yeah, you are right.
06:20Not everybody's on the same track.
06:23And, you know, AI systems often tend to be seen by
06:29AI companies and tech CEOs and big tech companies and governments themselves
06:34as presenting, you know, a financial opportunity.
06:39So as a result, people tend to focus on AI systems themselves, forgetting that AI and
06:48capitalism are inherently interlinked.
06:51So the idea of pushing for accountability of the bodies that are developing and releasing
06:57AI is not something that is shared by everybody, again, but it's something that we should all focus on.
07:06I mean, it is a worry, isn't it, perhaps, that technology is sort of just running ahead?
07:11I mean, initially we had social media, which at the time, at the beginning, was thought
07:15of as a great thing.
07:16And perhaps now a lot of the negativity of social media is being uncovered.
07:21The fear is that the same thing could and will happen with AI.
07:26Yeah, again, social media has been great in, for example, connecting
07:32people across the world, but again, without the necessary guardrails, without the necessary
07:38accountability mechanisms, we have also seen, with Cambridge Analytica as one of
07:44the major examples, democratic processes being manipulated and
07:54broken down.
07:55So just as social media can become a force that destroys democracy, with AI systems, again,
08:03there have to be clear regulatory guidelines for those that are developing them, guidelines that
08:10ensure that the AI systems we are developing are actually likely to benefit society, not
08:18just a handful of tech companies or big tech corporations.
08:22And unfortunately, that's currently what's happening.
08:25If you look at Meta, just at the beginning of this year, they rolled back a lot of the
08:32guardrails they had, not just on the social media platform, but on their AI systems themselves.
08:39And if you look at Google, just a few days ago, they also walked back their promise,
08:45their pledge not to use AI for military purposes.
08:49And again, this is why holding these big corporations and AI companies accountable
08:55is really important, in order to shepherd AI systems towards societal
09:03good.
09:04And what about the overall effect of AI on the human race, if you like?
09:08I mean, there's been a lot of concern, hasn't there, about the effect on jobs?
09:13Yeah, that's one of the main concerns.
09:16And one of the concerns has been that AI is replacing people, displacing jobs and so on.
09:25And I think something we tend to forget is the fact that despite what we hear about
09:32AI systems being fully autonomous and fully agentic, the reality is that, again, AI systems,
09:42the most developed state-of-the-art large language models or what's now known as agentic
09:48systems are inherently human through and through.
09:53They need constant human hand-holding.
09:57So even though we tend to hear about AI displacing jobs, at the end of the day, what's happening
10:04is that people's jobs have been displaced in order to, you could call
10:11it, babysit AI systems.
10:13So humans are still required, for example, for what's known as reinforcement
10:21learning from human feedback.
10:23So even after AI systems have been developed and released, there is constant need for humans
10:29to monitor, to assess and to rate and evaluate these AI systems so that, you know, again,
10:36they are heading in the right direction.
10:38So what I wanted to say is that even though AI might be displacing some jobs at the end
10:43of the day, we still need humans to ensure that AI systems function the way they are
10:50supposed to.
10:51Yeah, I noticed you mentioned humans there.
10:52I know you've also spent time researching human behaviour before you went on
10:57to this.
10:58So keeping in mind everything that we've talked about, should we fear AI, do you think,
11:03or is it all positive?
11:06That is a complex question.
11:08I think the answer is somewhere in between.
11:12And again, I tend to think AI is inherently human through and through, again, from the
11:18data that's used to train it, from the various, you know, gig workers and data workers that
11:25ensure that the data that we are using, the data that we are feeding AI systems is in
11:30the right format and that there is no toxic content in the training data set, to
11:36the scientists and the tech companies themselves.
11:40So the idea of, you know, fearing AI itself doesn't really make sense.
11:45What we should fear is, you know, governments and most importantly, AI companies and big
11:52tech corporations with AI.
11:54So that's what we should fear.
11:56And as I mentioned, you know, these companies have become too big to care, too big to
12:01regulate.
12:02And, you know, now that they have this unprecedented power, and they can
12:09operate without proper guardrails,
12:12they are likely to do as they please in a way that makes financial sense to their corporations.
12:19So what we should fear and what we should care about is, again, you know, powerful people
12:24with AI rather than, you know, AI itself, rather than the artefact itself.
12:29Good to have you on the programme today.
12:31Thanks very much for joining us.
12:32Abeba Birhane, founder of the AI Accountability Lab at Trinity College Dublin.
12:36Thanks.
12:37Thank you for having me.