Eric Effron, Editorial Director, NewsGuard
Christina Montgomery, Vice President; Chief Privacy and Trust Officer, IBM
Jim Steyer, Founder and CEO, Common Sense Media
Moderator: Ellie Austin, FORTUNE
Transcript
00:00 (audience applauding)
00:02 - All right, there's a lot of fear
00:04 surrounding artificial intelligence
00:06 that extends beyond security.
00:08 When prompted for information,
00:09 AI has produced hallucinations
00:12 with inaccurate and sometimes even harmful information.
00:15 So in this session, we'll hear from business leaders
00:18 on how their organizations are tackling misuse
00:21 and misinformation.
00:23 Please welcome to the stage Eric Effron,
00:25 editorial director at NewsGuard,
00:28 an organization tracking AI-enabled misinformation.
00:32 Christina Montgomery, vice president
00:34 and chief privacy and trust officer at IBM.
00:38 Christina testified in front of Congress this past May
00:41 to share IBM's recommendations
00:43 for how to set guardrails for AI.
00:45 And Jim Steyer, CEO of Common Sense Media,
00:48 an organization that reviews media and technology content
00:52 with children's safety in mind.
00:54 They'll be interviewed by Fortune's
00:55 deputy editorial director for Live Media, Ellie Austin.
00:59 (upbeat music)
01:01 - Hi everyone, and thank you to our trio of panelists
01:12 for joining us.
01:13 So Jim, I'm gonna start with you.
01:15 We heard then in the introduction
01:17 about the work that you do
01:19 to safeguard children and young people.
01:21 And I know that last month,
01:23 Common Sense Media unveiled its first ever rating system
01:26 for AI products.
01:27 Actually, you say the world's first ever rating system
01:29 for AI products.
01:31 It was a five-point scale.
01:34 Some organizations came out of it well, some not so well.
01:37 I'd love to know what Big Tech's response to it was,
01:40 and did you hear from any specific leaders
01:42 about their results?
01:43 - Okay, fair question.
01:44 So number one, nice to see you all.
01:47 I hope you're big Common Sense Media fans.
01:50 I would say that Big Tech's response was,
01:53 this would be Sam Altman, Sundar, and James Manyika at Google,
01:58 Satya Nadella, et cetera.
02:00 Number one, and it's very gratifying to us,
02:02 they all agreed Common Sense should be the group
02:03 that does independent third party ratings.
02:05 That was a big deal because it matters
02:07 that they understand we're gonna do this
02:10 just like we've done it
02:11 for all other forms of media and tech.
02:13 Second, some of them did not like the grades they got
02:16 in their first rating.
02:18 Because we were pretty tough on issues
02:22 that related to transparency, fairness, efficacy.
02:26 It's very different than rating a movie or a TV show.
02:28 So I definitely heard from the CEOs
02:31 of several of the companies that we rated,
02:32 which to me is a very good sign,
02:35 means we're getting their attention.
02:36 - So what comes next?
02:38 - Oh, basically we think that this is incredibly important
02:41 for the public writ large here in the US and globally
02:44 to understand what AI means.
02:46 And in a very basic way,
02:48 understand how it can impact their lives,
02:51 their kids' lives, but also democracy.
02:53 And the bigger frame that I think AI is gonna cast.
02:59 And it's really important, I think,
03:01 the public understands that
03:02 because there needs to be legislative efforts
03:06 to put guardrails in place.
03:07 And I think the ratings also help us understand
03:10 what kind of legislative strategy and guardrails
03:13 need to be in place here in the US and globally.
03:16 - Christina, you've argued for regulation
03:18 to focus on use cases rather than the technology itself.
03:21 Can you elaborate a bit on why that is?
03:23 - Yeah, absolutely.
03:24 I mean, these technologies are very context dependent.
03:28 A recommendation for what restaurant you might enjoy
03:32 is obviously very, very different than a recommendation
03:34 for who you're gonna hire,
03:36 or whether somebody qualifies for a loan and the like.
03:39 So that's been a big point of advocacy.
03:42 That coupled with the fact that the technology
03:44 is evolving and developing so quickly.
03:47 And if you start to try to regulate something
03:50 like the capabilities of the technology,
03:52 your regulation is never gonna keep up.
03:55 So absolutely, we've been focused on directing regulation
03:59 to the highest risk uses of AI
04:01 and to take into consideration the socio-technical concerns
04:05 and the context in which it's deployed.
04:07 - We'll come back to some of that in a second.
04:09 But Eric, I want to come on to you.
04:10 So at NewsGuard, you analyze the trustworthiness
04:13 of news sites and you were doing that long before AI
04:15 was a preeminent issue.
04:17 What happens, what steps do you take
04:20 when you find a website that has repurposed content
04:23 from a top publisher?
04:24 Is there a human that you can get in touch with to speak to?
04:27 What happens next?
04:28 - Yeah, in one of the earlier panels, Alan Murray mentioned
04:31 that one of the themes that's emerged
04:33 from the last couple of days is that there's a feeling
04:36 that there needs to be a human in the loop.
04:38 So NewsGuard, we think of ourselves
04:40 as the humans in the loop.
04:42 We're a journalism organization.
04:44 We rate and review websites and we've done about 8,000 now
04:48 in nine different countries and four different languages
04:52 based on some very basic criteria, journalism criteria.
04:56 Does the website run corrections?
04:58 Do they tell you who owns them?
04:59 Sort of nothing very controversial.
05:02 Do they distinguish opinion from news?
05:05 Do they tell you what their agenda is?
05:07 And we developed a ranking, ratings, scores,
05:10 and reviews of these websites.
05:13 The AI angle is that, as we all know
05:16 from the last couple of days,
05:17 and we've all known this before,
05:19 AI can do great mischief and it can do great good.
05:22 We think that it does better good
05:25 when it has good information.
05:27 So our content, both our website ratings
05:30 and also we have over 1,000 debunks
05:32 of very prominent narratives,
05:34 can help teach these tools what to believe
05:37 and what not to believe.
05:38 You know, if you do a Google search,
05:40 "Is Ukraine run by Nazis?" you'll find lots of articles
05:45 that say, yes, Ukraine is run by Nazis.
05:47 They may be from RT and Sputnik
05:50 or far right pro-Russia US sites.
05:53 NewsGuard can tell you that those are not
05:57 credible news sites or at least you should take
06:00 what they say with a grain of salt.
06:01 So really our tools are meant to sort of make
06:04 the information that's available both to humans
06:08 and to bots more reliable.
06:11 - And what responsibility do you think
06:13 the companies who are creating the bots
06:15 which are repurposing the content have
06:17 in this whole process?
06:18 - Well, I think they have a tremendous amount
06:20 of responsibility because the potential harm
06:23 really almost can't be overstated.
06:26 Coming into an election year, obviously,
06:28 there's a lot of mischief that can be done.
06:31 There already is some very, let me just mention this,
06:35 since the advent of the Israel-Hamas War, October 7th,
06:40 we've identified 75 narratives that are very prominent
06:45 on social media and on websites.
06:48 And with the election coming up, we've already noticed,
06:52 we've already found examples of some mischief
06:54 that's being done with deep fakes.
06:56 So the good needs to be balanced.
07:03 I'm sorry, the AI companies really can be much smarter
07:08 and more aggressive about how they monitor information
07:12 and produce information that actually comes
07:14 from decent vetted sources.
07:16 - And Christina, on that note,
07:18 as we do head into an election year,
07:19 what collaboration do you want to see
07:21 between government, political campaigns, corporate America,
07:25 to try and mitigate what could become pretty chaotic
07:29 as we head up to the end of next year?
07:31 - Yeah, absolutely, I mean, it comes down to trust, right?
07:34 Can people verify what they're seeing?
07:35 And I think that's true with the technology
07:38 even more broadly than in the context
07:39 of elections and misinformation in general, right?
07:42 So it has to be a collaboration with governments
07:46 and companies both having really critical roles to play.
07:49 I mean, for companies, they have to be responsible
07:52 and be held accountable and have accountability mechanisms
07:56 for the AI that they're developing and deploying.
07:58 - What kind of mechanisms?
07:59 - So for IBM, for example, we have a set of principles
08:02 that we've built an internal governance program
08:05 around with respect to AI in terms of where
08:08 we're gonna deploy and use it,
08:10 that it be fair and explainable,
08:14 the steps that we take in order to build
08:17 that trust and the like.
08:19 We develop practices like ethics by design
08:23 throughout our company to hold ourselves accountable
08:26 to those principles, and we work on solutions
08:28 for our clients to do the same so that they have
08:31 the capabilities to do things like generate
08:33 nutrition labels, right, fact sheets associated
08:36 with the AI models that they're putting in context
08:38 and do things like algorithmic impact assessments
08:41 in high-risk use cases and the like,
08:44 and then advocating for policies that support
08:48 sort of that risk-based regulation,
08:52 focusing on the highest-risk uses,
08:54 and being clear in terms of what is high-risk
08:59 and what do we expect from companies
09:01 when they're deploying something like a solution
09:04 in a high-risk space, what does an algorithmic
09:06 impact assessment look like and when is it required?
09:09 So I think it's absolutely a collaboration
09:11 required on the part of both.
09:13 - Jim, do you wanna add something?
09:14 - Yeah, I'm deeply skeptical of what's gonna happen in '24.
09:16 I think it's gonna be a total shit show
09:18 in terms of misinformation.
09:20 I think we should be honest about that.
09:21 Look, Elon has gutted the trust and safety division
09:24 at Twitter, X, whatever he calls it.
09:27 So has Facebook, right?
09:28 So we know that, remember, we're the biggest advocacy group
09:30 in the field on these issues,
09:32 and they've just gutted their staff.
09:34 I'm not worried about IBM.
09:36 I'm worried about Twitter.
09:37 I'm worried about Facebook and Instagram.
09:40 I'm worried, look, and there's been no regulation
09:42 of those platforms.
09:43 One thing I would say in the context of AI is,
09:46 how do you think we did with social media regulation?
09:49 Terribly, right?
09:50 I run the biggest group in the country on those issues,
09:53 and it's a joke.
09:53 Washington hasn't passed a law about privacy
09:56 (we had to do it in California)
09:58 or about social media regulation
10:00 since Mark Zuckerberg was in kindergarten.
10:01 And so these are really serious issues.
10:04 I think you're gonna have a really, really scary '24 election
10:08 because you have so many different actors,
10:11 both domestic and foreign,
10:12 who are gonna try to interfere with this election
10:15 and put out disinformation,
10:16 and the companies have gutted the staff
10:19 that was supposed to do it.
10:20 And I think we need to call them out by name
10:22 and we need to hold them accountable.
10:23 I don't think the federal government will do anything
10:26 other than jawboning,
10:27 but I think this is an extraordinary moment
10:30 for our democracy, and as a kids' advocate,
10:33 I'm mostly worried about it in terms of young people
10:35 and what world they're gonna inherit,
10:37 but I think we have to be extremely serious about this
10:40 'cause I think '24 here,
10:42 and there are hundreds of elections around the world
10:44 that are also gonna be impacted by disinformation,
10:47 and the ability to regulate that.
10:49 So I'm very, I think we really have to call out
10:52 the key platforms and shame them into behavior.
10:55 - And whose responsibility is it to call out those platforms
10:58 and what does that shaming look like?
11:00 - It's ours.
11:01 I mean, we're the biggest advocacy group, so it's on us,
11:03 but I think it's also on,
11:05 I think it's on folks like IBM to call out their,
11:08 I'm not, as I said, I'm not worried about IBM.
11:10 I'm worried about Facebook, X,
11:13 some of the other platforms that I think could really damage
11:16 the democratic systems in our country
11:18 and leave this country and the world
11:20 with a whole mess on our hands.
11:23 So this is very big stuff,
11:25 and I don't think regulation will come.
11:26 We're very involved in Europe with the new AI regulation
11:29 and privacy, but I think this is a watershed moment
11:34 for our democracy, and we should all be speaking out.
11:37 All of you, whatever field you work in
11:39 should be concerned about this
11:41 because our democracy's on the line in '24,
11:44 and the global democracies are as well.
11:46 - And were you about to say
11:47 you don't think regulation will come in 2024?
11:50 - I think you'll see regulation, AI regulation, no.
11:53 I think you'll see it in California maybe
11:55 'cause you'll see what we do here,
11:56 just like we did with the privacy law in 2018
11:58 and the ballot initiative we ran in 2020.
12:01 So I think you'll see California.
12:02 I think you'll see England pass some stuff this year,
12:05 and the EU just came out with the AI regs this week,
12:08 but Washington, nothing will come out of Congress.
12:11 They can't even get their act together to pass a budget,
12:14 so they're not gonna pass comprehensive AI legislation.
12:17 So I think corporate responsibility on that,
12:20 and I think the public speaking out,
12:22 and the media speaking out,
12:23 and holding some of these platforms accountable.
12:26 - You know, if I could, just to back up,
12:29 Jim, your thought about how we can't really rely
12:33 on the platforms,
12:35 because NewsGuard is a journalism organization,
12:37 we always call for comment
12:38 if we're saying something negative about a platform,
12:41 and if I had a dollar for every time
12:43 that we've gotten in touch with X, TikTok, Facebook,
12:47 you name it, and said, hey, we've just noticed
12:50 that you have dozens of these AI-generated videos
12:55 on your platform, which falsely claim fill in the blank,
12:59 they say, oh, thank you for letting us know,
13:01 we'll take them down.
13:03 And I think that's not good,
13:05 that they're relying on this little news,
13:07 this little organization in New York
13:08 with 35 employees as their fact-checkers.
13:12 And so that really, to me, sort of exposed
13:15 that there are solutions,
13:16 and we think we're part of the solution,
13:19 but that the platforms themselves really have a lot of work to do.
13:23 - Just to add, look, you have an election denier
13:25 who's leading in the Republican nomination,
13:27 who's lying every day
13:29 about the fundamentals of our democracy.
13:32 So to me, and by the way, X has changed,
13:35 they just let, what's his name, back on the platform,
13:36 that was really smart, Elon.
13:38 - Alex Jones.
13:39 - And I just think you have a situation where,
13:43 as I said, you have a fundamental election denier
13:45 running for president in the United States,
13:48 which to me the platforms
13:49 should not be putting up there.
13:51 So you're just starting with that,
13:53 but think of the other misinformation that can happen.
13:56 I think it's a very, very critical moment
13:59 for America and also global society,
14:02 and I think tech leaders should be calling out
14:04 their colleagues for failing to do this,
14:07 because I think it's that moment
14:10 and that import to our society.
14:12 - You think that might change things?
14:13 - I do.
14:14 - Let's see if anyone has a question in the room.
14:16 Yes, over here, could we get a mic over here, please?
14:17 And could you tell us your name and your company?
14:20 Thank you so much.
14:21 - Hi, I'm Sami Hossaini from AARP,
14:24 but it's a personal question.
14:25 So you say people should speak up,
14:27 media organizations speak up,
14:29 corporate organizations speak up,
14:30 but even in this room, you can't get everybody to agree
14:32 what to speak up about, so how to deal with that?
14:34 There's different points of view.
14:36 It's not that I disagree with you,
14:38 but society at large isn't speaking with one voice,
14:41 so you can scream all you want about people should speak up
14:44 and corporate America should do something
14:45 and brands should do something about it,
14:47 but how do we deal with it?
14:49 - Well, I think there's a couple of examples.
14:50 You've seen all the brands who fled the Twitter platform.
14:54 I think that's really good and they should do it.
14:56 I also think there is something called truth,
14:58 and I don't think that there are multiple versions
15:00 of truth and fact, and I think that companies
15:03 should stand up for what's right,
15:06 and there's no question about what happened
15:08 in the 2020 election or what happened on Jan 6th.
15:11 So companies that go, well, on the one hand,
15:13 on the other hand, that's just a BS response,
15:16 and that's just catering to, I think you sometimes
15:18 have to stand up for bigger principles,
15:20 and I think that on basic stuff around misinformation
15:23 and disinformation, it's not that hard to do it.
15:26 When you're talking about this perspective
15:27 on the situation in Gaza, that's somewhat different,
15:31 but I think on fundamentals of election integrity,
15:34 and by the way, I'm glad you brought up Twitter
15:35 'cause I think Shou Zi Chew has a big platform
15:38 that influences young people a ton.
15:41 I think that Neal and the people at YouTube
15:43 have to be really careful on this too,
15:45 but fundamentally, the ones to watch,
15:47 I'm scared of are Facebook always
15:49 because they don't do their job,
15:51 and Twitter, they've gutted their trust
15:53 and safety divisions.
15:54 You haven't, but they have.
15:55 - Yeah, and I would just say another thing,
15:57 Section 230 reform, so we've been,
16:00 this is social media, this isn't AI,
16:01 but it comes back to social media reform
16:03 that never happened.
16:05 There needs to be a reasonable care standard,
16:07 and that's something you can ask,
16:09 you can write your congressional representatives
16:12 and ask them to advocate for that.
16:14 I mean, I don't know if it'll happen,
16:15 but it's absolutely something that we've been advocating
16:17 for at IBM for years now.
16:19 - And on that note, as we kind of come to the,
16:22 we've got time for one more question.
16:23 I can see Madeline's paddle in the corner,
16:25 and I wanna get one more question in.
16:27 - I'm Ryan Close.
16:29 What's the likelihood of having an AI-generated news source
16:32 that's non-biased?
16:38 - By the way, I'm a complete luddite.
16:40 Why you would ask me, I can barely turn on my phone.
16:42 I would say that's a really interesting question.
16:46 We should ask that to my friend Rita, who was just up here,
16:48 or Sam, or some of the technology people.
16:51 I don't know.
16:52 - I do have a couple thoughts about that.
16:54 So one of the things
16:57 that marks good journalism versus bad journalism
17:01 is that you don't assume that you've got it right.
17:03 You call for comment, you are skeptical.
17:07 Some would say cynical.
17:09 And so it's hard for me to imagine an algorithm
17:13 calling for comment, for example,
17:14 just to make a sort of a very basic point.
17:17 But there's also the principles
17:20 that sort of underlie good journalism,
17:23 at least theoretically, that you're trying to get it right,
17:27 you're trying to be fair,
17:28 you're trying to not approach it with preconceived notions.
17:32 And so I think between the actual physical work
17:37 that journalists do,
17:38 if you can think of calling for comment as physical,
17:40 and the values that underlie good journalism,
17:44 I think it's hard for me to imagine
17:46 like a trustworthy, completely AI-generated, say, website.
17:51 But we'll have to see who the humans in the loop are
17:55 to see how that actually works.
17:58 - Okay, unfortunately, we have to leave it there.
17:59 Thank you for such an impassioned discussion
18:01 to Jim, to Christina, and to Eric.
18:04 Thank you so much.
18:05 - Thank you. - Thank you.
18:06 (audience applauding)
18:07 - Good to see you guys.
18:08 - Thank you.
18:08 I'll go home.
18:09 (upbeat music)
18:11 Okay, so that wraps our main stage conversations this morning.
18:15 It's now time to head into our strategy sessions,
18:18 and those begin at noon.
18:20 And please do go to the session that you signed up for.
18:23 If you're confused, if you've forgotten,
18:24 ask someone from Fortune and they will help you.
18:27 We'll meet back here at 1 p.m.
18:29 for our lunch and keynote conversation
18:31 with Vinod Khosla and Fortune senior editor, Jeremy Kahn.
18:35 Thank you so much.
18:36 (audience applauding)
19:13 - Ladies and gentlemen, we ask your kind cooperation
19:16 in clearing the room once again
19:18 so that we may prepare for the next session.
19:20 Thank you.
