As the huge summit on artificial intelligence kicks off here in Paris, one of the world’s top experts has spoken to FRANCE 24 about the pros and cons of the technology, and how it needs to be used responsibly. It comes as France, and co-host India, try to claim some ownership of AI amid intense competition from the US and China. We spoke to Abeba Birhane, founder of the AI Accountability Lab at Trinity College in Dublin, Ireland.

Transcript
00:00 So joining us now is one of those taking part in one of the panels at the conference today, the panel looking at how to ensure the technology is used for the public good in a resilient and open AI ecosystem. Abeba Birhane is founder of the AI Accountability Lab at Trinity College in Dublin. She joins us now. Thanks very much for being with us on the programme. On the surface, then, what France is trying to do sounds great, doesn't it? Making AI that's ethical, that's accessible as well.

00:24 Yeah, absolutely. It means spending a huge amount of time looking at public AI, artificial intelligence in the public interest. And some of that goes to ensuring that AI systems deployed into the world and integrated into critical social infrastructure are actually tested and vetted, and can do the job that they claim to do. This is one of the core and important aspects of investing in artificial intelligence that is in the public interest.

01:05 Is it really possible, though, to make it ethical and accountable? How on earth do you police it?
01:14 Well, I tend to believe that we should focus on making those who develop and deploy AI systems, and the vendors of those systems, as accountable as possible, rather than trying to make the AI systems themselves accountable. Because at the end of the day, AI is human through and through: from the data that's required to train the systems, to the developers and the scientists, to the companies developing and selling these products. Accountability lies squarely in ensuring that those responsible, those behind AI systems, are accountable for the models they choose to develop and choose to deploy. So the focus has to be on ensuring the relevant bodies are accountable, rather than the artefact itself.

02:10 And do you feel everybody's on the same track with that? There are signs, aren't there, that perhaps some countries, notably the US under Donald Trump, and also China, are rather less worried about all of that.

02:24 Yeah, you are right, not everybody's on the same track. AI systems often tend to be seen by AI companies, tech CEOs, big tech corporations and governments themselves as a financial opportunity. So as a result, people tend to focus on the AI systems themselves, forgetting that AI and capitalism are inherently interlinked. The idea of pushing for accountability of the bodies that are developing and releasing AI is not something that is shared by everybody, but it is something that we should all focus on.

03:09 It is a worry, isn't it, that the technology is just running ahead. Initially we had social media, which at the beginning was thought of as a great thing, and perhaps now a lot of the negativity of social media is being uncovered. The fear is that the same thing could and will happen with AI.
03:35 Yeah. Social media has been great at, for example, connecting people across the world. But without the necessary guardrails, without the necessary accountability mechanisms, we have also seen democratic processes being manipulated and broken down; Cambridge Analytica is one of the major examples. So just as social media can become a force that destroys democracy, there have to be clear regulatory guidelines for those developing AI systems, guidelines that ensure the AI systems we are developing are actually likely to benefit society, not just a handful of tech companies or big tech corporations. And unfortunately, the latter is what's currently happening. If you look at Meta, just at the beginning of this year they rolled back a lot of the guardrails they had, not just on their social media platforms but on their AI systems themselves. And if you look at Google, just a few days ago they also walked back their pledge not to use AI for military purposes. Again, this is why holding these big corporations and AI companies accountable is so important, in order to shepherd AI systems towards societal good.
05:13 And what about the overall effect of AI on the human race, if you like? There's been a lot of concern, hasn't there, about the effect on jobs?
05:23 Yeah, that's one of the main concerns: that AI is replacing people, displacing jobs, and so on. And I think something we tend to forget is that, despite what we hear about AI systems being fully autonomous and fully agentic, the reality is that AI systems, even the most developed state-of-the-art large language models, or what are now known as agentic systems, are inherently human through and through. They need constant human handholding. So even though we tend to hear about AI displacing jobs, at the end of the day what's happening is that people's jobs have been displaced in order to, you could call it, babysit AI systems. Humans are still required, for example, for what's known as reinforcement learning from human feedback. Even after AI systems have been developed and released, there is a constant need for humans to monitor, to assess, and to rate and evaluate these systems, so that they are heading in the right direction. So what I wanted to say is that even though AI might be displacing some jobs, at the end of the day we still need humans to ensure that AI systems function the way they are supposed to.

06:58 I noticed you mentioned humans there. I know you spent time researching human behaviour as well before you went on to this. Bearing in mind everything that we've talked about, should we fear AI, do you think, or is it all positive?
07:15 That is a complex question. I think the answer is somewhere in between. Again, I tend to think AI is inherently human through and through: from the data that's used to train it; from the various gig workers and data workers who ensure that the data we are feeding AI systems is in the right format and that there is no toxic content in the training data set; to the scientists and the tech companies themselves. So the idea of fearing AI itself doesn't really make sense. What we should fear is governments, and most importantly AI companies and big tech corporations, with AI. And as I mentioned, these companies have become too big to care, too big to regulate. Now that they have this unprecedented power, and can do without proper guardrails, they are likely to do as they please in whatever way makes financial sense to their corporations. So what we should fear and what we should care about is powerful people with AI, rather than AI itself, rather than the artefact itself.
08:38 Good to have you on the programme today. Thanks very much for joining us, Abeba Birhane, founder of the AI Accountability Lab at Trinity College in Dublin. Thanks.

08:46 Thank you for having me.
