Google DeepMind's guardrails "were clearly wrong" after Gemini showed historically inaccurate depictions of people of color, VP says

Transcript
00:00 You guys created a model that was actually very hard
00:02 to get to generate images of white people,
00:05 even in historically appropriate contexts.
00:07 And it was quite embarrassing for you.
00:09 How do you overcome issues like that?
00:11 Because I think that's one of the problems now:
00:14 people want a model that you can just tell,
00:16 okay, be diverse when that's appropriate,
00:18 and when it's not appropriate,
00:21 be correct to the historical context.
00:23 But apparently you can't just tell the model that.
00:25 So how do you overcome this problem?
00:27 First of all, thanks, Jeremy, for asking.
00:29 It really highlights the fact
00:31 that these challenges are difficult, right?
00:34 First of all, we want the tools,
00:37 the generative AI tools, to be relevant to people
00:41 all around the world.
00:42 So if you ask a model to create images of, let's say,
00:47 a nurse in a hospital, or a school teacher,
00:51 or a board member of a company,
00:53 it has to be relevant to people all around the world.
00:56 Google is a global company.
00:57 We have people using our tools on every continent.
01:02 But the guardrails that we put in in that case
01:06 were clearly wrong.
01:07 We came out and we actually said that.
01:10 We then pulled Gemini's image generation
01:15 so that we could improve it and put it back out
01:17 in a way that's more acceptable to more people.
01:20 These are cutting-edge tools.
01:22 You put them out there.
01:23 People find them really useful sometimes.
01:26 Sometimes they find them offensive, right?
01:28 And I think this is a healthy thing
01:32 that the whole AI community
01:33 is really grappling with right now.
