Transcript
00:00 You are watching The Context with me, Christian Fraser. It is time for our regular Thursday feature, AI Decoded.
00:13 Welcome to the program. Freely available and largely unregulated, the creative tools of generative AI are now
00:21 amplifying the threat of disinformation. How do we tackle it?
00:25 What can we trust, and how are our enemies using it to undermine our elections and our freedoms? This week,
00:32 Governor Gavin Newsom signed a bill in California that makes it illegal to create and publish deepfakes
00:39 related to the upcoming election, and from next year the social media giants will be required to identify and
00:46 remove any deceptive material. It is the first state in the nation to pass such legislation. Is it the new benchmark?
00:54 Some of this stuff is obviously fake; some of it is designed to poke fun.
00:58 But look how these AI memes of cats and ducks powered the pet-eating rumor mill in America, with dangerous
01:06 consequences.
01:07 It is a problem in China, too.
01:09 How does the Communist Party retain social order in a world where the message can be manipulated?
01:15 Beijing is pushing for all AI content to be watermarked and is putting the onus on the creators. And
01:21 from politics to branding: there is no bigger brand than Taylor Swift,
01:26 hijacked by the former president, who shared fake images of her fans
01:31 endorsing him. It affects us all.
01:35 With me, as ever in the studio, our regular commentator on AI:
01:40 Stephanie Hare is here. And from Washington, our good friend Miles Taylor, who worked in national security,
01:46 advising the former Trump administration. We'll talk to them both in a second.
01:50 But before we do that,
01:51 we're going to show you a short film. One of the many false claims that has appeared online in recent months was a story that
01:56 Kamala Harris had been involved in a hit-and-run accident in 2011.
02:00 That story was created by a Russian troll farm and was one of the many inflammatory stories
02:07 Microsoft intercepted. Its threat analysis unit, which does that work in New York, is at the very forefront of defending our elections.
02:15 AI correspondent Marc Cieslak has been to see it.
02:19 Times Square, New York City: an
02:22 unlikely location for a secure facility which monitors attempts by foreign governments to
02:29 destabilize democracy. It is, however, home to MTAC, the Microsoft Threat Analysis Center.
02:35 Its job is to detect, assess and disrupt cyber-enabled influence threats to democracies worldwide.
02:44 The work that's carried out here is extremely sensitive; we are the very first people that have been permitted to film inside.
02:52 It's also the first time Russian, Iranian and Chinese attempts to influence the US election have all been detected at once.
03:00 All three are in play, and this is the first cycle where we've had all three that we can definitely point to.
03:06 Individuals from this organization serve on a special presidential committee in the Kremlin.
03:12 Reports compiled by these analysts advise governments like the UK and
03:16 US, as well as private companies, on digital threats.
03:19 This team has noticed that the dramatic nature of the US election is
03:24 complicating attempts at outside interference.
03:27 The biggest impact of the switch of President Biden for Vice President Harris has been that it's really thrown the Russians
03:35 so far off their game.
03:36 They really focused on Biden as somebody they needed to remove from office to get what they wanted in Ukraine.
03:42 Russian efforts have now pivoted to undermining the Harris-Walz campaign via a series of fake videos designed to provoke
03:50 controversy.
03:51 These analysts were instrumental in detecting Iranian election influence activity via a series of bogus websites.
03:59 The FBI is now investigating this, as well as Iranian hacking of the Trump campaign.
04:04 What we found in the source code for these websites
04:07 was that they were using AI to rewrite content from a real place, using that for the bulk of their website, and then
04:14 occasionally they would write real articles
04:17 when there was a very specific political point
04:20 they were trying to make. The third major player in this election interference is China,
04:24 using fake social media accounts to provoke a reaction in the US public.
04:30 Experts are unconvinced these campaigns affect which way people actually vote, but they worry they are successful in
04:37 increasing hostility on social media. Marc Cieslak, BBC News.
04:42 Yeah, that gives you an idea of just how quickly this is advancing. Stephanie,
04:46 do you think, as the creative technology improves, that
04:53 we're going to be very close, very soon, to not knowing the difference between fact and fiction?
04:59 It's getting harder and harder to detect a lot of the deepfake imagery.
05:03 Audio is particularly difficult to detect; it's a lot easier to fake.
05:07 So yes,
05:08 I think we're right now possibly in the last US election where it's kind of easy to see when you're being
05:13 manipulated, and the trick really is: do you want to believe it?
05:17 Because what this is all about is really hijacking your emotions. And watermarks, because that is often the go-to
05:24 solution to this: why would that not be the answer to all the ills of generative AI?
05:31 I still wonder if there would be ways of manipulating even that, but it's probably a pretty good start.
05:36 It's just that thing:
05:37 you always feel like you're playing whack-a-mole with these technologies, you know.
05:40 You do one thing, and then it advances, and you have to catch up again.
05:43 So we would probably start with watermarks, and then there would be an advance and a kickback,
05:48 and we'd have to react to that, and so on and so forth.
05:50 I think it's also about preparing citizens, though, to have the critical media skills
05:56 we all need: to be able to deconstruct narratives, look at who's giving us information, and ask whether it checks with reality.
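
(A concrete illustration of that whack-a-mole point. The toy sketch below, which is not any real labeling standard, marks a PNG as AI-generated via a metadata text chunk and then shows that simply re-saving the file strips the label; the file names and the "ai_generated" key are made up for the demo.)

```python
# Toy sketch: label an image as AI-generated in PNG metadata, then show how
# easily the label is lost. Not a real watermarking standard; schemes like
# C2PA embed signed manifests, but stripping and re-encoding remain the
# weak spot being discussed here.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

img = Image.new("RGB", (64, 64), "white")      # stand-in for generated content
meta = PngInfo()
meta.add_text("ai_generated", "true")          # the "watermark" label
img.save("marked.png", pnginfo=meta)

print(Image.open("marked.png").text)           # {'ai_generated': 'true'}

# An adversary (or any naive re-upload pipeline) just re-saves the pixels:
Image.open("marked.png").save("stripped.png")
print(Image.open("stripped.png").text)         # {} ... the label is gone
```

(Pixel-level, steganographic watermarks survive a plain re-save, but they too degrade under cropping and re-compression, which is the arms race Stephanie describes.)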
06:05 Miles, I was saying to Stephanie that what's happening in California this week is a good step forward.
06:10 You've got the governor there putting the onus on the social media companies and on the creative companies to do something about this,
06:18 particularly around the election. And then Stephanie said to me: well, okay, American companies are regulated
06:24 by American legislators; why wouldn't they just go to China?
06:29 Look, I mean, I think that's one of the concerns always when it comes to tech regulation. And, Christian, you remember well the debate
06:36 over encryption in the United States. There was the San Bernardino terrorist attack,
06:41 you know, almost ten years ago now, where the FBI could not get into the shooter's phone, and
06:47 it led to a big debate in the United States about these encrypted messaging apps, like Telegram and Signal, and whether it should be
06:55 legislated that those were forbidden in the United States.
06:58 Opponents of those laws, though, said: well, sure, you can outlaw them here,
07:03 but someone overseas is going to create the same apps, and it's going to be really difficult to prevent people
07:09 from using a version of it overseas. We face the same problem here with regulations around deepfakes and AI:
07:17 it's only as far as US legislation and law enforcement can reach that those types of things can be enforced.
07:24 So there is a big challenge here.
07:26 But also there's a domestic challenge about the First Amendment
07:30 implications and free speech implications, and of course Governor Newsom signing that law has opened up that debate as well.
07:36 So there will be a lot of contention over the next few years about how to get this right from a legislative and regulatory
07:42 standpoint. The other thing that occurs to me, and we talk about protecting children online all the time on this program:
07:48 one of the issues the companies always come up against is finding the material and getting rid of it.
07:54 If you are having to find
07:57 very good deepfake material, that process becomes much more difficult, doesn't it?
08:03 And how do we find a metric to hold the social media companies and the online companies to account?
08:10 Well, I think Stephanie said something really important here,
08:13 which was the game of whack-a-mole you're playing. If you think that watermarking, you know,
08:18 basically putting a sticker on this content and saying 'this is fake', if you think that's a solution,
08:23 it's going to be really hard to keep up. A lot of the experts I talk to in AI say that
08:28 maybe that's a short-term solution,
08:30 but in the longer term you have to re-architect what's real and what's not real, to your earlier point, Christian.
08:37 What do I mean by that? There's a word I want listeners to remember: provenance.
08:43 There's a big discussion in technology communities about making sure, by default,
08:48 when you do something like capture a picture on your iPhone, that it's
08:53 cryptographically signed to say 'I was taken at this place, at this time', and that can't be changed.
09:00 All right, it's tied to a public ledger.
09:02 Not that people can see your photos publicly, but there's a cryptographic signature that can't be broken.
09:08 Eventually all of our tech will be signed with that
09:11 provenance that says 'I am real', and you'll know if it's not real because it won't have that point-of-creation
09:18 certification. But it's years before we're there, and in the meantime a lot of difficult conversations are going to be had.
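
(To make the provenance idea concrete, here is a minimal sketch of point-of-creation signing, assuming a device that holds a private key. The key handling, metadata fields and stand-in image bytes are illustrative; real schemes such as C2PA content credentials are far richer.)

```python
# Minimal sketch of point-of-creation provenance: sign the image bytes plus
# the capture claims with a device key, and let anyone verify with the
# matching public key. Illustrative only, not a production scheme.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # would live in secure hardware
public_key = device_key.public_key()        # published for verification

def sign_capture(image_bytes: bytes, place: str, time: str) -> bytes:
    """Sign the pixels together with the capture claims."""
    claims = json.dumps({"place": place, "time": time}).encode()
    return device_key.sign(image_bytes + claims)

def verify_capture(image_bytes: bytes, place: str, time: str, sig: bytes) -> bool:
    """Check that neither the pixels nor the claims changed since signing."""
    claims = json.dumps({"place": place, "time": time}).encode()
    try:
        public_key.verify(sig, image_bytes + claims)
        return True
    except InvalidSignature:
        return False

photo = b"\x89PNG: stand-in for real image bytes"
sig = sign_capture(photo, "Rome", "2024-09-19T10:00Z")
print(verify_capture(photo, "Rome", "2024-09-19T10:00Z", sig))            # True
print(verify_capture(photo + b"edit", "Rome", "2024-09-19T10:00Z", sig))  # False: tampered
```

(The design choice doing the work is that verification needs only the public key: anyone downstream can check the content is untouched without trusting whoever forwarded the file.)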
09:24 It's almost a supply chain approach, or even a criminal-evidence approach, where you have a chain of evidence
09:29 and you have to be able to follow it all the way through, and you can't tamper with it.
09:33 Or, when we had mad cow disease here in the United Kingdom many years ago,
09:37 people suddenly wanted to know, when they were going grocery shopping and
09:40 they wanted to buy some beef,
09:41 what farm did it come from; and suddenly people realized they needed traceability all the way through the food chain.
09:47 So I'm wondering if there's a parallel there, to help people understand that all of the things you're creating
09:53 can have that encoded, so you would always be able to know. It's like following a painting:
09:59 when a painting is sold, it might go through 50 different hands if it's 400 years old, you know, before it finally ends up in the Met.
10:07 Where did it come from? Was it illegally bought?
10:09 You know, etc. You should be able to follow data through in the same way.
10:13 Let's bring in someone who is working in this field. Here in the studio with us is Dr Christian
10:19 Schroeder de Witt. He is a senior research associate in machine learning at the University of Oxford.
10:24 He and his team are researching how to identify some of these deepfakes using AI. Welcome to the program.
10:32 We were just talking about how quickly things are advancing, to the point where, to the naked eye, it's becoming more difficult, certainly with imagery.
10:41 What sort of technology are you developing that makes that easier? Yes, so, Christian, I really liked this discussion.
10:48 I think the solution to our problems of
10:51 establishing provenance of content will involve both a lot of research and the adoption of existing technologies.
10:58 So in terms of research, I think the clip really brought home, you know, that AI is being used to amplify the misinformation problem.
11:06 So let's use AI to solve it.
11:07 Some of the research that I do is about using AI to detect misinformation.
11:13 So you're using AI to track down the deepfake AI? Basically, yes.
11:18 So what I spent the summer doing, you know,
11:21 some research with BBC Verify and the University of Oxford, was: when you have a picture, for example,
11:28 explain whether it is a deepfake or not. Let's bring one up.
11:32 I've got one that I think you've looked at, and people will be familiar with this. It's
11:37 the Pope in a puffer jacket, which actually did get into some
11:41 news streams around the time this photo came out. So although we're joking, it did actually deceive quite a lot of people.
11:47 Show me what you did with this. Yeah, so, exactly: you can see the Pope in a puffer jacket. Obviously, from the context,
11:54 it's quite clear it's a deepfake, right? And it's probably for entertainment purposes.
11:57 But a human expert, for example at BBC Verify, could look at this picture and could
12:02 find the details that are a bit off.
12:04 For example, the spectacles seem to be fused into the cheeks, or the crucifix doesn't quite attach to the chain, right?
12:10 And so the question is,
12:12 you see, it's very important to have these explanations as well,
12:15 not just a number, like 'this is 0.7
12:19 deepfake or not'; you need to have an explanation for why it is a deepfake.
12:23 So we now have AI tools that can create these explanations as well, right?
12:27 And is that something that you put on the desktop, something that you could run a photograph through? Yeah, potentially, yes.
12:33 Okay. But these tools still have a lot of failure cases, and this is where we need more research. Okay,
12:39 yeah. Where do they fail, and why? Famously, it's things like they can't get fingers right,
12:45 so you might get six fingers on a hand. Yes,
12:47 so this is a classic. On videos, for example, you have some sort of temporal inconsistency:
12:53 an object disappears suddenly, for example. But the problem is that these tools are trained on a lot of data, and
13:00 they're learning so-called features, patterns, right, that help them to make these decisions.
13:05 Now it can happen that sometimes these patterns are present in some
13:09 images that are too far away from what the tool has seen during training. Too technical? Yeah.
13:13 Can you explain that to people? Is it a pixel difference? Is it in the way,
13:18 I mean, it's not in the way the image looks, is it?
13:20 The AI presumably is looking deeper into the image than that.
13:24 Yes, so the AI is actually taking an image, and then it is
13:30 projecting this into some very high-dimensional space, and within this high-dimensional space, basically,
13:39 you then do a dimensionality reduction to a lower space, and then in this lower space what you can do is you can
13:46 form these features. Okay? And then if you have an image that it hasn't seen during training, these features
13:53 might not generalize to that image, and then you can have issues where an image
13:59 evokes some impressions that are wrong, right?
14:02 So it sees some reflections or something and flags them, when actually they are not deepfake.
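
(A toy sketch of the pipeline being described, assuming we already have high-dimensional image embeddings from some pretrained vision model. The random data, dimensions and labels below are stand-ins for illustration, not the Oxford team's actual method.)

```python
# Toy version of "project to high-dimensional space, reduce dimensionality,
# form features, classify": embeddings in, fake/real probability out.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 512))     # stand-in for image embeddings
y_train = rng.integers(0, 2, size=1000)    # 0 = real, 1 = fake (toy labels)

pca = PCA(n_components=32).fit(X_train)    # high-dim -> lower-dim "features"
clf = LogisticRegression(max_iter=1000).fit(pca.transform(X_train), y_train)

x_new = rng.normal(loc=5.0, size=(1, 512))      # far from the training data
print(clf.predict_proba(pca.transform(x_new)))  # a score is still produced,
                                                # even far outside training
```

(The failure mode discussed next follows directly: a sample far from the training distribution still gets a score, and the learned features may not mean anything there.)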
14:06 But I think Stephanie mentioned the photographs that they struggle with. Yeah, I've got one here:
14:12 this is Lionel Messi kissing the World Cup, much to my chagrin. But
14:19 this one is real,
14:22 and yet the machine thought it was fake. Why? Yes, so,
14:26 the machine might think so just because it maybe hasn't seen an image
14:31 that's close enough to this picture in its training set, right?
14:34 And we always get new images coming in; for example, Messi winning the World Cup was a new occasion.
14:39 And it might think that, for example, some reflections in the trophy, or the way
14:44 Messi holds his hand, or maybe the skin tone, aren't natural. And the problem is we then get these
14:50 explanations, and these explanations can be very, very convincing, but they are nevertheless wrong.
14:56 Miles, do you like this idea of AI tracking AI deepfakes? I
15:02 don't just like it, Christian, I love it. We've got to use AI against AI to protect ourselves.
15:07 It's actually going to be our best asset, and one of the things that's interesting, that's happening right now, is that
15:13 we always focus on who's developing the technology that could be used for bad,
15:18 but my fellow Oxonian there on set
15:21 and a lot of folks around the world are now investing time and resources into building companies around deepfake detection.
15:28 I mean, there are companies in the United States like Truepic and
15:31 Reality Defender that are exciting; they're venture-backed; a lot of people want to go work for them.
15:36 And what do those companies do? Do they focus solely on trying to prove what is and isn't real?
15:42 Well, one of the things that's just become possible, really only in the past few months, is that some of these technologies are leveraging
15:50 context awareness of the world to determine whether something's fake or real.
15:53 So these models aren't just looking at the image and saying it looks manipulated.
15:57 The models can also say: well, the Pope these past couple of weeks has been on vacation in Italy;
16:04 there's no way this photo was just taken with him wearing a puffer jacket. And they can give you a confidence score.
16:10 That's right. And are you incorporating that in your service?
16:14 Incorporating wider context on where the content is found, when it is found, and who is depicted, so sort of semantic information?
16:21 Absolutely.
16:22 Yes.
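
(One simple way to picture combining the two signals just mentioned. This is a purely illustrative sketch with made-up weights and scores, not how Truepic, Reality Defender or the Oxford team actually fuse evidence.)

```python
# Illustrative fusion of a pixel-level forensics score with a
# context-consistency score into a single confidence value.
def fused_fake_confidence(pixel_score: float,
                          context_score: float,
                          w_pixel: float = 0.6) -> float:
    """Weighted blend of two probabilities that the image is fake.

    pixel_score:   P(fake) from an image-forensics model
    context_score: P(fake) from checking claims against world knowledge,
                   e.g. "was the Pope actually there at that time?"
    """
    return w_pixel * pixel_score + (1 - w_pixel) * context_score

# The puffer-jacket photo: the pixels look fairly clean, the context is damning.
print(fused_fake_confidence(pixel_score=0.55, context_score=0.95))  # ~0.71
```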
16:22 It strikes me that the social media companies and the online companies have a vested interest in this, because if you can't tell fact from
16:28 fiction, you get what's called a liar's dividend, right? You actually become a disruptor:
16:33 you poison the well so much that actually no one believes anything.
16:38 And that's not good for a social media model that makes its money from spreading news and informing people.
16:44 Well, it just raises the question of what social media is for,
16:48 right? So it was quite exciting at first, when it was this new thing and you could stay in touch with your friends,
16:52 and then a lot of people, journalists, would use certain tools to keep up with the news and get breaking news fast.
16:57 But once it starts feeling like actually they're just reading your data, or
17:02 you're looking for news to get it fast but it's not actually reliable, and the information ecosystem is being flooded all the time,
17:09 eventually you might just turn off. And that's without even going into the mental health implications of being on these sites, right?
17:15 Which we know are really harmful for people.
17:17 So I wonder sometimes if we might have lived through the golden age of social media
17:22 and we're now entering this new phase;
17:24 and if it isn't cleaned up,
17:26 people could just end up leaving it, or only going to it the way you would read the National Enquirer in the United States, to
17:32 read about aliens or something. Are the big developers interested in what you're doing?
17:36 Yes, I would say so. Over the summer my collaboration was with a big tech company,
17:40 in fact, so there is a lot of interest in these solutions. Actually, the interest goes even further:
17:45 what we can do now is proactively try to look for deepfakes and disinformation on social media platforms, right, using autonomous agents.
17:53 So I think this is where things are going, and then we can establish this situational awareness on a sort of global scale.
17:59 Which Miles also mentioned. I've got to ask you as well: is this the right environment to be developing in, the right country?
18:04 Do you get the support for stuff like this? I think so, yes. Yeah, generally, yes. I think the UK is a great place.
18:11 That's encouraging, isn't it?
On that note,
18:15 one of the problems here is not so much the deepfake news as the disinformation that is spread by
18:20 conspiracy theorists who are creating material they believe to be true.
18:24 What if we could bring the conspiracy theorists out of the shadows and back to the light? Coming up after the break,
18:31 we'll hear about the AI chatbot that is
18:35 deprogramming the people who have disappeared down the rabbit holes. We'll be right back. Stay with us.
18:43 Welcome back. The moon landings that never happened; the Covid microchip that was injected into your arm;
18:50 the pizza pedophile ring in Washington.
18:53 Conspiracy theories abound, often with dangerous consequences. Many have tried reasoning with the conspiracy theorists,
19:00 but to no avail. How do you talk to someone so convinced of what they believe,
19:06 who is equally suspicious of why you would even be challenging those beliefs?
Well,
19:11 researchers have set about creating a chatbot to do just that. It draws on a vast array of information
19:16 to converse with these people, using bespoke, fact-based arguments, and the
19:22 DebunkBot, as it's known, is proving remarkably successful. Joining us on Zoom is the lead researcher, Dr Thomas Costello.
19:30 He's an associate professor in psychology at the University of Washington. You're very welcome to the program.
19:36 Tell us what the DebunkBot does.
19:41 Yeah, sure. Thanks. I'm happy to be here.
19:43 So the idea is that studying conspiracy theorists and trying to debunk them has been pretty hard until now, because there are so many
19:50 different conspiracy theories out there in the world, and you need to look across this
19:56 whole corpus of information
19:59 comprehensively to debunk all of them and study them in a systematic way. And large language models,
20:05 these AI tools, are perfect for doing just that.
20:08 So we ran an experiment where we had people come in and
20:12 describe a conspiracy theory that they believed in and felt strongly about.
20:16 The AI summarized it for them, and they rated it,
20:19 and then they entered into a conversation with this DebunkBot,
20:23 which was given exactly what they believed and was set up to persuade them away from the conspiracy theory using
20:30 facts and evidence. And what we found at the end of this back-and-forth conversation of about eight minutes was that
20:37 conspiracy theorists reduced their belief in their chosen conspiracy by about 20% on average. And
20:43 actually, one in four people came out at the other end of that conversation
20:47 actively uncertain about their conspiracy; they were newly skeptical.
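
(A minimal sketch of that loop, assuming the OpenAI chat API. The model name, prompts and session length are illustrative guesses, not the researchers' actual implementation.)

```python
# Sketch of the DebunkBot pattern: take the user's own statement of a belief,
# then hold a short factual dialogue aimed specifically at it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def debunk_session(belief: str, rounds: int = 4) -> None:
    """Hold a short, fact-based dialogue targeting one stated belief."""
    messages = [
        {"role": "system",
         "content": "You are a debunking assistant. Politely persuade the "
                    "user away from the following belief, using specific "
                    f"facts and evidence: {belief}"},
    ]
    for _ in range(rounds):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        text = reply.choices[0].message.content
        print("BOT:", text)
        messages.append({"role": "assistant", "content": text})
        # The participant answers in their own words, and the loop continues.
        messages.append({"role": "user", "content": input("YOU: ")})

debunk_session("The moon landings were staged in a film studio.")
```

(The tailoring is the point: the belief is quoted verbatim in the system prompt, so the counter-evidence targets the person's exact claim rather than a generic version of it.)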
20:50 And is it because they don't know where to go to get this information, and they are
20:57 suspicious of anybody that might have the answers to
21:01 the things that concern them? Yeah, I mean, that could be part of it.
21:05 I think really it's just being provided with facts and information that's tailored to exactly what they believe. And how do you deploy it?
21:13 Because I can't imagine that conspiracy theorists are wandering around saying: disprove the conspiracy theory that I believe to be true.
21:21 Yeah, no, I mean, that's a great question.
21:23 I think it's one that I'd be curious to hear others' answers about too. In the studies,
21:28 we paid people to come and do it.
21:30 That said, I'm optimistic about,
21:33 you know, the truth motivations of human beings in general.
21:36 I think people want to know what's true, and so if there's a tool that they trust to do that,
21:41 then all the better. Yeah. Miles, can you see a purpose for this in America?
21:45 Yeah, I mean, I can certainly see this principle being incorporated into a lot of technology.
21:50 I mean, a lot of us already use things like ChatGPT every day, and I'll actually give you an example, Christian, of ChatGPT
21:58 disproving something for me. So there's a famous
22:02 Winston Churchill quote: a lie gets halfway around the world before the truth can get its pants on. No quote better describes the
22:10 conversation we're having; it's about how fast this disinformation spreads.
22:13 Well, guess what? I put that into ChatGPT before I did a presentation on this subject. It said: hold on a second,
22:19 that's actually not a quote from Winston Churchill; it's a quote from Jonathan Swift in the 1700s. So AI helped me
22:28 disprove misinformation that's been around for years. So yes, I think this is important, and it should be integrated into these technologies.
22:35 And, Christian, is this where the two worlds collide? Because presumably there are conspiracy theorists who believe something so fervently that they put out
22:43 AI-generated material as well.
22:45 So if you can deal with the conspiracy theory, maybe you can stop the prevalence of fake material.
22:52 Yeah, potentially. I must say, though, that this study was done in laboratory conditions,
22:59 so it will be very interesting to see whether these results also translate into the real world.
23:05 And then also, the
23:08 large language models that were used there were safety fine-tuned,
23:11 so that means, you know, they were sort of programmed to tell the truth and so on.
23:16 And if that safety fine-tuning is not there, you know, they could be used for something we call interactive
23:22 disinformation: they could be used to convince people of things that are not true.
23:26 So that's the big risk that I see here. And, Thomas, I've got a question for you.
23:31 I'm curious just about how much having good information actually changes people's minds, and the example I would give is smoking.
23:39 We've known for decades that smoking is bad for you. Everybody agrees; we've got all the data to back it up.
23:44 We put labels on it really clearly, and yet people still smoke. And when you talk to a smoker and try to persuade them to
23:50 give it up, because you care about them, they will sometimes really entrench. It's really hard to break, not just because it's addictive,
23:58 but because they maybe want to smoke.
24:00 So I see this parallel, perhaps, with conspiracy theories, in terms of: we have beliefs, and
24:06 information is not always enough to change them. It's not just about facts; it's about something else.
24:12 Yeah. Yeah, that's a great point. I mean, I think in the case of smoking, or other kinds of drug use,
24:17 we know that it's bad for us when we start doing it.
24:21 They're fundamentally not about information, whereas beliefs, and particularly conspiracy beliefs, are often descriptive:
24:28 they're accounts of what went on in the world; that, you know, al-Qaeda didn't
24:33 put together the 9/11 terrorist attacks, it was the government. And so dealing with
24:39 claims about the world is something that I think is conducive to informational persuasion in a way that maybe,
24:45 like,
24:46 nicotine use is not. Yeah. I mean, Miles, we focus so much on the legislating. It's the question
24:53 I always ask you: how far behind is Congress on that, and what are the state houses doing about AI legislation?
24:58 But what we've shown tonight is actually that it's the industry itself
25:03 that is forcing the change. Maybe it's not legislation, because legislation is always one step behind.
25:10 Well, well, Christian, I'm going to give you an embarrassing admission that proves that point.
25:15 So I was at dinner last night with one of the creators of ChatGPT,
25:19 GPT-3, one of the earlier versions. She worked for Sam Altman.
25:23 We were talking about the technology, and I complained to her. I said, you know,
25:26 I was teaching a course at the University of
25:29 Pennsylvania and I got lazy, and I was supposed to come up with a list of 25 books on a subject for my students.
25:34 I said, I'm going to look it up on GPT: what are the best 25 books?
25:37 It produced it; I emailed it out.
25:39 Well, guess what?
25:39 My students emailed me and said all of those books are fake. GPT-3 came up with a bunch of fake books. And I said this
25:46 to her, and she said: well, yeah, and that was bad, and it gave ChatGPT a bad reputation in your mind,
25:51 and that's why we kept improving the models: we don't want to serve you up false content, because you won't want to work with
25:58 this product. And so that may not be heartening to everyone,
26:01 but certainly those industry improvements move a lot faster than legislation, because there's a business imperative to get it right.
26:08 Yeah, that indeed is the vested interest that I see for a lot of the online companies and, of course, the AI companies
26:14 that are developing this stuff. We're out of time;
26:16 it flies by, doesn't it? And just to remind you that all these episodes are on the AI Decoded playlist on YouTube; some good ones
26:23 on there as well, so have a look at those. Thank you to Dr Schroeder de Witt, Dr
26:27 Costello, Miles, and of course to Stephanie. Let's do it again, same time next week. Thanks for watching.