During a House Energy and Commerce Committee hearing, Rep. Tony Cardenas (D-CA) questioned witnesses about disinformation online, and threats to US society.
Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:
https://account.forbes.com/membership/?utm_source=youtube&utm_medium=display&utm_campaign=growth_non-sub_paid_subscribe_ytdescript
Stay Connected
Forbes on Facebook: http://fb.com/forbes
Forbes Video on Twitter: http://www.twitter.com/forbes
Forbes Video on Instagram: http://instagram.com/forbes
More From Forbes: http://forbes.com
Category: News

Transcript
00:00 for five minutes for questions.
00:03 Thank you, Mr. Chairman.
00:04 I appreciate the opportunity for us to come together like this
00:06 and have this very, very important hearing.
00:08 It's affecting millions and millions
00:11 of Americans every single day.
00:13 And we hear the horror stories of the negative effects
00:16 of what's happening, especially when it comes to little children
00:19 and communities across America.
00:21 We're standing at a very unique point in American history
00:24 where a number of technologies are converging
00:26 to create a digital landscape that could prove
00:29 to be unfriendly to democracy and the public.
00:33 Advances in generative artificial intelligence,
00:35 as well as the rolling back of social media content moderation
00:38 policies, are opening the door to a boom in mis-
00:41 and disinformation in our information ecosystem.
00:45 Americans in whom social media platforms invest
00:50 the least will invariably bear the brunt of this.
00:54 We've seen this play out in the past two decades and cycles
00:57 of our elections, where Americans
00:59 whose primary language is Spanish and other languages
01:02 were exposed to higher levels of false information online.
01:06 That includes inaccurate information
01:08 about access to reproductive health, vaccine safety,
01:12 and election integrity.
01:14 Rampant mis- and disinformation serves to weaken democracy
01:18 and embolden our adversaries abroad,
01:20 as well as radical elements here in our own country.
01:23 It has a real-world effect on public health.
01:26 With a major U.S. election looming this fall,
01:30 we need to be paying attention.
01:32 There needs to be accountability from the platforms
01:35 that we trust to connect Americans
01:37 with each other and the world --
01:40 with the world around them to ensure that information
01:42 that is designed to harm them is not allowed to spread wildly
01:46 in the name of driving engagement
01:48 and record-breaking profits.
01:50 That accountability also needs to lead to equitable investments
01:56 in fighting mis- and disinformation
01:58 in languages beyond English.
02:01 Dr. Franks, in your testimony,
02:02 you talk about how the protections the tech industry
02:05 currently enjoys because of Section 230
02:08 have resulted in a warped incentive structure
02:11 that can create profit at the expense
02:14 of tremendous harm to people.
02:16 I've sent multiple letters to online platforms
02:18 with my colleagues highlighting these platforms'
02:21 lack of investment in Spanish-language
02:23 content moderation.
02:25 And while the responses we get are sometimes
02:28 receptive to the problem, we don't see follow-through
02:31 in the form of investment or action.
02:33 As things currently stand, do social-media platforms
02:37 have any incentive to seriously invest
02:39 in Spanish-language content moderation
02:42 outside of a fear of public shaming?
02:45 -I would say that, unfortunately,
02:46 the answer is probably not much.
02:48 Public shaming can do a little bit,
02:50 but we've already seen that in some of the documents
02:53 and the conversations that have been revealed
02:55 by whistleblowers and others,
02:57 that tech officials often openly talk amongst themselves
03:00 about how, "Oh, there's a new scandal.
03:02 We're probably going to get called before Congress.
03:04 We're going to be asked some embarrassing questions,
03:06 and then everybody's going to move on.
03:08 We'll go back to making money."
03:09 So I think, given the way that Section 230
03:12 has clearly been read and interpreted for these companies
03:16 as essentially guaranteeing them,
03:18 "You won't have to face the consequences of your actions."
03:21 You end up with a perfectly rational but terrible situation
03:24 where profit-seeking companies
03:26 think they can expand their enterprises,
03:29 they can offer all of these services
03:31 without offering any of these protections,
03:33 as you're pointing out.
03:35 -What would an incentive structure look like
03:37 that would produce a reasonable investment
03:39 in non-English-language content moderation
03:41 on social-media platforms?
03:43 -I think, at a minimum, you would have to really restrain
03:46 the definition and the interpretation of (c)(1), right?
03:49 The provision that's essentially saying
03:51 you are not responsible, you, Facebook,
03:53 or whoever the company is,
03:54 you're not responsible for these issues.
03:56 That is, as I've mentioned before,
03:58 being used to defend against any number of claims
04:01 that are really far beyond anything contemplated
04:03 by Section 230, I think, in 1996.
04:07 And really, what you would need to show
04:08 is that if you are causing harm by pushing out a product
04:12 that you have not established appropriate safeguards for --
04:15 for instance, it should be clear that if you are targeting
04:18 and making your product accessible to people
04:21 who do not speak English,
04:23 or that you are offering it outside
04:25 of your company's own chosen language,
04:27 you need to have protections in place
04:29 and linguistic competence in place
04:31 and cultural competence in place
04:33 in order to make that a safe product.
04:35 But if Section 230 is interpreted as saying
04:37 you can simply throw up your hands and say,
04:39 "We just offered a great service,
04:41 and maybe we didn't do it very well,
04:43 and maybe it's not that safe -- it's not our responsibility."
04:46 I think that particular interpretation of (c)(1)
04:48 definitely has to be restricted.
04:52 -We don't treat bank robbers like that.
04:54 If somebody drives somebody to a bank robbery,
04:56 we don't say, "Oh, you just drove the car.
04:58 You didn't run in and rob the bank.
05:00 The others did."
05:01 You're still held accountable.
05:02 You're involved in that situation.
05:04 You are an integral part of how and what took place.
05:08 And we don't have that for these organizations.
05:10 And if you'll indulge me, I'd just like to ask you
05:13 to give us your written interpretation
05:16 of what a cyber-civil-rights bill should look like
05:19 or some of the elements thereof.
05:21 Thank you very much. Thank you, Mr. Chairman.
05:22 I apologize. I went over my time.