Earlier this month, Sen. Richard Blumenthal (D-CT) questioned experts on the effects of recent deepfakes of President Biden during a Senate Judiciary Committee hearing.
Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:
https://account.forbes.com/membership/?utm_source=youtube&utm_medium=display&utm_campaign=growth_non-sub_paid_subscribe_ytdescript
Stay Connected
Forbes on Facebook: http://fb.com/forbes
Forbes Video on Twitter: http://www.twitter.com/forbes
Forbes Video on Instagram: http://instagram.com/forbes
More From Forbes: http://forbes.com
Category: News

Transcript
00:00 Thank you very much. Thanks to all of our witnesses for your really excellent
00:05 testimony. Mr. Scanlon, you hypothesize, as we can't know for sure, that the Biden
00:17 deepfake had minimal impact, but we can't be certain what the vote would have been
00:26 but for those calls. And I understand there is an investigation ongoing. The
00:36 Attorney General is conducting it. It's under New Hampshire law. I assume it's
00:42 criminal law as well as civil, but there are no federal remedies. In your view,
00:49 would it be helpful to have criminal penalties under federal law specifically
00:56 aimed at this kind of deception? And I think it was Mr. Coleman who suggested that
01:03 criminal penalties could be an effective deterrent, but they have to be really
01:08 more specific and stringent than they are now. Mr. Chairman, I have to agree
01:14 that we truly don't know what the impact was on the New Hampshire presidential
01:20 primary. We only know that we had a good turnout and the results were what
01:26 they were. And we still have an active prosecution going on. The AG in New
01:34 Hampshire has identified a company or companies that participated and an
01:41 individual who is a suspect, and they're moving forward with that. At
01:49 some point, I believe that there is a federal component to this because it's
01:56 going to be a national problem. And I'd like to give a shout out to Kate Conley
02:02 who works with Jen Easterly at CISA. Kate was in New Hampshire on the day of the
02:09 presidential primary and she traveled around to polling places with me to try
02:14 and get a handle on, you know, how big this thing actually was, even though that
02:18 was difficult to determine. But yes, I think that, you know, these things in a
02:26 national election are going to be generated nationally, whether it's
02:32 foreign actors or some other malicious circumstance. And I think we need
02:36 uniformity and the power of federal government to help put the brakes on
02:41 that. Instances that happen locally, certainly federal government
02:46 assistance would be helpful, but I think that should remain the prerogative of
02:50 state law enforcement and the Attorney General, with assistance from federal
02:55 authorities where it's appropriate. Let me ask you and the other witnesses:
02:59 Senator Hawley and I have proposed a framework which includes an independent
03:06 oversight entity, a set of standards that would be imposed by that entity, a
03:14 requirement for some licensing before models were deployed, testing to assure
03:21 that they were safe and effective, just as the FDA reviews drugs to make sure
03:27 they are safe and effective, and potentially penalties such as we've been
03:33 discussing, as well as export controls to assure that our national security is
03:38 protected. I'm assuming, just for the sake of speed, that all of you
03:47 would agree that some kind of framework like that one makes sense. I actually
03:54 have specific thoughts on that framework. I think it's a good start, but I really
03:57 think it's important that whatever framework we set adopts what's called
04:02 a defense-in-depth approach, right? So we need metadata, watermarking, cryptographic
04:07 hashing, which is a little complicated, but it's invisible watermarks and a hash
04:11 database, kind of like NCMEC, AI detection and AI poisoning. It also needs to cover
04:17 both the generative AI platforms and the online platforms. We need both of those
04:22 folks. We can't just say licensed generative AI companies and leave it at
04:25 that. Honestly, we need government buy-in, generative AI buy-in, platform buy-in,
04:30 journalist buy-in, and then detection companies. And all of those points are
04:34 encompassed by our framework, particularly the watermarking. Yeah, I
04:37 really think watermarking is getting a lot of attention here, and it really
04:41 doesn't solve that much of a problem. You need cryptographic hashing, invisible
04:46 watermarking. That's really important. Mr. Coleman? Yeah, just to add on to that, and
04:50 I think just to unpack two things here. We're talking about watermarking and
04:54 cryptographic hashing, effectively what's called provenance. It's either there or
04:59 it's not. The challenge with that is it presupposes that everybody's gonna
05:02 follow those same rules. All the bad actors will follow the same rules, and
05:06 we've seen time and time again a lot of the applications, whether they're
05:09 on your phone, in the App Store, or online, or they're open-sourced, they just
05:14 aren't gonna follow the rules. So we can't expect everyone to say, "Hey, we're
05:17 gonna play nice within this walled garden," when the bad actors, by definition,
05:20 are not playing by the rules at all. And so, you know, with Reality Defender, we
05:24 focused on inference. We don't touch any watermarking. We don't touch any personal
05:28 data. We actually assume we'll never see the ground truth. We'll never even know
05:32 if it is real or not, which means, instead of saying yes or no, we're taking a more
05:36 measured probabilistic approach. A probability saying maybe we're 95%
05:40 confident, maybe we're 62% confident. We build that into a larger framework of
05:44 just one signal among many to make a better insight, to have a platform or a
05:49 team decide to block or flag a piece of media, or a person, or an action. We're
05:55 gonna adhere to five-minute rounds on the first round. I hope to come back to
06:00 this line of questioning, and I apologize that others of you, Mr. Ahmed, you may
06:04 have some comments as well, but in deference to my colleagues who have
06:07 other commitments, I'm gonna turn to the ranking member. Thank you again, Mr.
06:11 Chairman, and thanks to everybody for being here. Mr. Coleman, you raised in
06:15 your opening statement what is, I think,