Stuart Armstrong, Co-founder and Chief Technology Officer, Aligned AI
Tulsee Doshi, Director and Head of Product, Responsible AI, Google
Brian Patrick Green, Director, Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University
Moderator: Rana el Kaliouby, Affectiva, Brainstorm AI
Transcript
00:00 Fear surrounding AI extends beyond job replacement
00:03 and productivity concerns.
00:05 There are many who believe that in the rush
00:06 to commercialize AI, safety and ethics
00:09 are not being prioritized appropriately.
00:12 President Biden's executive order stated
00:14 that responsible AI has the potential
00:16 to help solve urgent challenges
00:18 while making our world more prosperous,
00:20 productive, innovative, and secure.
00:23 As we all know, implementing secure and responsible AI
00:26 has proven a little bit more difficult than we thought.
00:29 Here to discuss best practices for mitigating biases,
00:33 ensuring safety, and building responsible AI,
00:36 please welcome Stuart Armstrong,
00:38 co-founder and chief technology officer at Aligned AI,
00:41 a startup focusing on teaching AI to align to human values.
00:45 Tulsee Doshi, director and head of product
00:48 for responsible AI at Google,
00:50 where she has led more than 30 launches
00:52 to make their products more inclusive.
00:55 And Brian Patrick Green,
00:56 director of technology ethics at the Markkula
00:59 Center for Applied Ethics at Santa Clara University.
01:03 Brian is also the author of "Space Ethics,"
01:05 where he explored how ethics relate to space exploration.
01:09 They're going to be interviewed
01:10 by Fortune Brainstorm AI co-chair
01:12 and co-founder and CEO of Affectiva, Dr. Rana el Kaliouby.
01:17 Before we dive right in, please watch this video
01:20 from one of our partners, Check Point.
01:24 (upbeat music)
01:26 (upbeat music)
01:29 - All right, thank you all for being here.
01:56 We're gonna be discussing how to make sure
01:58 our AI is responsible and safe.
02:00 And Tulsee, I wanna start with you.
02:02 Exciting news from Google this week, launching Gemini.
02:06 Tell us a bit more about it.
02:08 I mean, Gemini is one of the first
02:09 truly multimodal AI models out there.
02:12 So tell us a bit more about that.
02:14 And then how do we ensure that it's built
02:17 in a way that mitigates data and algorithmic bias?
02:20 - Yeah, it's a great question.
02:21 So yes, we are very excited about Gemini.
02:24 And when we say multimodal,
02:25 really what we're talking about is the ability
02:27 to engage across modalities.
02:29 So things like text, image, video, audio,
02:33 really being able to bring these together, right?
02:36 So think about being able to take an image
02:38 and ask a question about that image and get an answer, right?
02:41 And these combinations of things open us up
02:44 to a wide variety of use cases
02:46 that we're really excited about.
02:47 And that has a lot of potential
02:49 that we're really excited about,
02:50 but introduces a bunch of new responsibility considerations.
02:54 And so a lot of what we've been thinking about
02:56 as we're building Gemini is how do we take everything
02:59 that we have learned in responsibility
03:02 and adapt it to this new structure
03:04 and these new capabilities, right?
03:05 So we released the AI principles,
03:07 Google's AI principles five years ago now.
03:10 And those AI principles cover things like fairness,
03:13 transparency, security, and safety.
03:18 And so now we're saying, well,
03:19 what does that mean in the context of multimodal?
03:22 And what does that mean at every stage
03:24 of the development process?
03:25 So how do we think about safety in our training data?
03:28 And what does it mean to ensure
03:29 that we have really high quality training data?
03:32 How do we then translate that to the model?
03:34 And what are the different use cases
03:36 that might come up in the context of multimodality?
03:39 So for example, an image by itself
03:42 might be a perfectly safe, reasonable image.
03:44 Text by itself might be perfectly safe,
03:46 but now the combination of the two is actually offensive.
03:48 And so how do we develop new metrics,
03:50 new ways of evaluating that account for those concerns?
03:53 And then also the various kinds of downstream use cases
03:56 and applications.
03:57 And so for us, it's really been,
03:58 how do we build those AI principles in?
04:00 How do we build in the practice of the work
04:02 that we've been doing at every step of the way?
04:05 And that's really what we've been working on
04:06 and are gonna continue to work on
04:08 as we keep expanding Gemini outwards.
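The combination problem Tulsee describes above, an image and a caption that are each fine on their own but offensive together, is why multimodal systems are typically scored on the pair jointly rather than per modality. Google's actual evaluation stack is not described in the panel; the sketch below is only a minimal illustration of the idea, with placeholder functions (`text_safety_score`, `image_safety_score`, `joint_safety_score`) standing in for real classifiers and thresholds.

```python
from dataclasses import dataclass


@dataclass
class SafetyVerdict:
    text_ok: bool
    image_ok: bool
    combined_ok: bool


def text_safety_score(text: str) -> float:
    """Placeholder unimodal text check (0 = safe, 1 = unsafe)."""
    return 0.1  # stand-in score


def image_safety_score(image_bytes: bytes) -> float:
    """Placeholder unimodal image check (0 = safe, 1 = unsafe)."""
    return 0.1  # stand-in score


def joint_safety_score(text: str, image_bytes: bytes) -> float:
    """Placeholder multimodal check scoring the *pair* as a whole.

    This is the piece unimodal checks miss: an innocuous caption over an
    innocuous photo can still be offensive in combination.
    """
    return 0.8  # stand-in score


def evaluate(text: str, image_bytes: bytes, threshold: float = 0.5) -> SafetyVerdict:
    """Run all three checks; a launch review would look at every flag."""
    return SafetyVerdict(
        text_ok=text_safety_score(text) < threshold,
        image_ok=image_safety_score(image_bytes) < threshold,
        combined_ok=joint_safety_score(text, image_bytes) < threshold,
    )


if __name__ == "__main__":
    # Each modality passes on its own, but the pair is flagged.
    print(evaluate("an innocuous caption", b"<image bytes>"))
```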
04:09 - Amazing.
04:10 Stuart, you're on a mission to align AI algorithms
04:14 with human values.
04:15 - Yes.
04:16 - What does that mean?
04:17 And also, are human values universal?
04:20 (laughter)
04:21 - Let's not get into that one.
04:23 (all laugh)
04:24 The answer is universal enough for AI purposes.
04:29 Well, it's just the outgrowth of the very first program.
04:34 It's: get the machine to do what you want it to do
04:38 and not do what you don't want it to do.
04:41 It sounds simple, but
04:44 failures of that cause all the disasters today.
04:47 Nobody wants their algorithm to be biased.
04:51 Nobody wants their algorithm to go crazy
04:54 when it goes off distribution.
04:56 Nobody wants adversarial images to break their classifier.
05:01 And the thing is that alignment is a capability.
05:06 It's not just sort of a safety measure
05:10 like a fourth airbag on the roof or something.
05:13 Why are the customer service bots so terrible,
05:18 for instance? You could have powerful ones.
05:23 You could put GPT-4 behind it,
05:26 but GPT-4 is not reliable enough.
05:28 And none of the models today are reliable enough.
05:32 So you can't use them for that
05:34 'cause then they'll start insulting the customers.
05:37 This'll blow up.
05:38 So because this chatbot is not aligned
05:41 with the designer's intent, you can't deploy anything
05:48 but the most simplistic, safe, locked-down models.
05:48 - Brian, what concerns you as it relates to AI?
05:52 - There are--
05:53 - Can you also comment on the intersection of space
05:56 and AI ethics?
05:57 I know you've spent a lot of time there.
05:59 - There are lots and lots of things to be concerned about.
06:02 All the way from safety, all the way to what does this mean
06:06 in terms of human self-conception?
06:08 Because if we externalize what we think makes us human,
06:10 if we think that we're human because of our intelligence
06:13 and then we externalize it from us
06:14 and it becomes better than we are at thinking,
06:17 then what does that mean about us?
06:18 What's left of what human beings are?
06:21 And we can see this in the way it's gonna affect labor
06:23 because people identify with their jobs.
06:25 There are all sorts of these kinds of effects.
06:27 Once again, it runs the full spectrum:
06:29 anything that we can apply intelligence towards
06:32 is potentially going to have an ethical issue related to it.
06:35 And so, yeah, if you wanna talk about space,
06:37 there are plenty of issues related to that
06:40 in terms of how is AI going to be used
06:44 in the design process, how is it gonna be used
06:46 in tracking debris around the Earth,
06:49 or tracking satellites or coordinating
06:51 large satellite constellations.
06:53 There's just a lot of things to think about.
06:55 - So you teach ethics and I'm curious,
06:57 do you see young people interested
06:59 in issues related to ethics and inclusion?
07:04 - Yeah, this has actually been really fascinating.
07:06 Over the last four or five years,
07:08 my class has gone from having just a few interested students,
07:11 maybe a half, the class filling up halfway,
07:14 all the way to having a long waiting list.
07:15 So I think that people have recognized now
07:17 there's a lot happening.
07:18 - I think that's great news for the world.
07:22 (laughing)
07:23 - So do I.
07:23 - So I wanna ask about your views
07:25 on Biden's executive order and the EU AI Act.
07:29 So Tulsi, what do you think about that?
07:31 - I mean, I'll start by saying I think it's amazing
07:34 that we are having this conversation, right?
07:36 Because we do need really good regulation.
07:39 And I think that is actually a really critical part
07:41 of making sure that we are building systems
07:44 that are safe and consistent across the industry.
07:47 And I think there's a lot to value in both the EU AI Act
07:50 and in Biden's executive order in terms of
07:53 how we think about high risk systems,
07:55 in terms of the way, for example,
07:56 the executive order is setting up an appropriate
07:58 hub-and-spoke model that actually really takes into account
08:01 all of the amazing work that organizations like NIST
08:04 and other organizations have been doing for years
08:06 to really build out and understand what risk means
08:09 and how we develop strong systems.
08:11 I think there's nuances now that we need to figure out,
08:14 right, so what does a dual use foundation model mean?
08:17 And how does that definition apply across companies
08:20 and organizations and across the industry?
08:22 And so I think we're at a really good point
08:24 to now be able to have those conversations
08:26 and really figure out what it means
08:29 when the devil gets into the details.
08:31 - I'll be taking audience questions soon,
08:33 so tee up your questions, but Stuart,
08:35 what do you think about AI regulation in general
08:38 and specifically the executive order?
08:40 - Well, there's an expression that you can't manage
08:43 what you can't measure, and that's a big problem with this.
08:47 So we wanna get rid of bias, but what is bias?
08:51 And that's one of the reasons that today
08:54 we're launching a product for measuring bias,
08:57 gender bias being the first one, and removing it from language models
09:00 and other generative models.
09:02 - Can you be a little bit more specific and
09:04 give us an example of how it would work?
09:06 - Well, I deployed it in creating stories,
09:12 and it was quite interesting that female protagonists
09:16 tend to have very passive roles.
09:20 Male protagonists tend to have very active roles.
09:23 So it was: there was a princess who was beautiful and
09:29 found a cuddly dragon who became her protector.
09:33 There was a prince who found a fierce dragon
09:35 but wasn't afraid, and they became friends.
09:38 Those sort of biases creep in all the time.
09:41 We actually found that fiction has a lot more bias
09:45 than, say, even professional things.
09:48 You ask for top professions for men
09:51 or top professions for women, that's less biased
09:54 than asking, for a romance story,
09:57 what does the protagonist do?
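Aligned AI's product is not described in any technical detail here, but one rough way to quantify the story bias Stuart describes (female protagonists written as passive, male protagonists as active) is to generate many stories per prompt and compare how often the protagonist is framed with active versus passive language. The sketch below is purely illustrative: `generate_story` is a hypothetical stand-in for a model call, and the word lists are toy lexicons rather than a validated measure of agency.

```python
# Tiny illustrative lexicons; a real evaluation would use a proper
# agency lexicon or a trained classifier, not hand-picked words.
ACTIVE_WORDS = {"fought", "led", "explored", "built", "decided", "defeated"}
PASSIVE_WORDS = {"waited", "was rescued", "was given", "was protected", "wept"}


def generate_story(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-generation model."""
    return "The princess waited in the tower and was protected by a cuddly dragon."


def agency_score(story: str) -> float:
    """Fraction of matched role words that are 'active' (0.5 = no evidence)."""
    text = story.lower()
    active = sum(text.count(w) for w in ACTIVE_WORDS)
    passive = sum(text.count(w) for w in PASSIVE_WORDS)
    total = active + passive
    return active / total if total else 0.5


def gender_gap(n_samples: int = 50) -> float:
    """Mean agency for male-protagonist prompts minus female-protagonist prompts."""
    male = [agency_score(generate_story("Write a short story about a prince."))
            for _ in range(n_samples)]
    female = [agency_score(generate_story("Write a short story about a princess."))
              for _ in range(n_samples)]
    return sum(male) / len(male) - sum(female) / len(female)


if __name__ == "__main__":
    # A positive gap suggests male protagonists are framed more actively.
    print(f"agency gap (male - female): {gender_gap():+.2f}")
```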
09:59 - The problem is these large language models
10:01 are trained on all of this data, right?
10:03 - Yes, and that's another problem,
10:06 is that you can't just tell them to be less biased.
10:09 To some extent you can say to a human,
10:12 you have very biased training data, account for this.
10:16 But you can't tell that to an AI
10:17 'cause they don't understand,
10:19 they don't generalize concepts in the same way we do,
10:22 and that's another thing that we're working on
10:25 to allow models to use concepts in the way humans do
10:30 and present this to humans in a human-understandable way
10:34 and say whether this approach is better than that.
10:38 That's another product we're launching today.
10:41 We're launching lots of products today.
10:43 - That's awesome.
10:44 Brian, what's your view on regulation?
10:46 - I think that there's more than enough out there
10:49 to be regulated, that would be the way to look at it, right?
10:51 There's so many things happening.
10:54 But the real question is how to apply this regulation.
10:56 So one of the things that I think of as an ethicist
10:59 is that if everybody just made the right decision
11:02 in the first place as an individual
11:03 within their organization,
11:04 and the organization had different ways to verify this
11:08 or facilitate these good types of decisions,
11:11 if the organizational structures were set up right
11:14 within the organization, nothing bad would ever get out,
11:16 and then the government would never have to get involved
11:18 with regulating it.
11:19 But that's not the world that we live in.
11:21 So the question is how do we set up a situation
11:23 where the people who are closest to the technology
11:25 and the technological development
11:27 have the resources that they need
11:28 in order to make the decisions that they need to make
11:31 and do the best that they can,
11:32 knowing also that not every organization
11:35 is going to be as good as some organizations,
11:37 and also some individuals are not gonna be
11:39 as good at this as others.
11:40 So we really need to have a structural solution.
11:43 In engineering ethics,
11:44 there's something called the Swiss cheese model
11:46 where you line up the slices of Swiss cheese
11:48 and some of the holes line up. A problem
11:50 gets stopped by some layers, but it gets through others.
11:53 If you have enough layers,
11:54 then the problem doesn't get through,
11:56 but you really need to have the layers,
11:58 and if you don't set those up,
11:59 then the problems will get through.
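As a toy illustration of the Swiss cheese model Brian describes: each review layer is leaky on its own, but a problem only reaches users if it slips through a hole in every layer, so stacking layers drives the escape rate down multiplicatively. The miss probabilities below are made up purely for illustration.

```python
import random


def layer_catches(miss_probability: float) -> bool:
    """One 'slice' of cheese: catches the problem unless it hits a hole."""
    return random.random() >= miss_probability


def reaches_users(layer_miss_probs: list[float]) -> bool:
    """A problem only gets out if it slips through the hole in *every* layer."""
    return all(not layer_catches(p) for p in layer_miss_probs)


def escape_rate(layer_miss_probs: list[float], trials: int = 100_000) -> float:
    """Estimate how often a problem makes it past all layers."""
    escapes = sum(reaches_users(layer_miss_probs) for _ in range(trials))
    return escapes / trials


if __name__ == "__main__":
    # Individually leaky layers (each misses 20% of problems) still combine
    # into a strong barrier: roughly 0.2 ** 4 = 0.16% escape through 4 layers.
    print(f"one layer:   {escape_rate([0.2]):.3%}")
    print(f"four layers: {escape_rate([0.2, 0.2, 0.2, 0.2]):.3%}")
```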
12:01 - Actually, I wanna quickly follow up on that.
12:03 So in your classes,
12:05 do engineering and computer science students,
12:08 are they required to take your ethics classes?
12:10 - So they are required to take an ethics class.
12:13 They don't necessarily take my ethics class.
12:15 - Okay, that's good.
12:16 That's still good.
12:18 Okay, questions.
12:19 Do we have any questions?
12:21 I don't see any questions yet.
12:23 All right, no questions yet.
12:25 So let me follow up on the idea
12:27 of how to align your organization
12:29 to be bought into implementing ethics.
12:32 How do you do that at Google?
12:34 - Yeah, it's a great question.
12:35 I think there's a few different pieces
12:37 you have to pull together
12:38 if you really wanna drive
12:40 organizational cultural change, right?
12:42 I think the first is actually having a vision
12:45 and a North Star for the organization to align on.
12:48 And so at Google,
12:49 the AI principles are that vision for us, right?
12:51 Really setting a North Star with seven principles
12:55 that we really want to make sure
12:56 that our products hold themselves accountable to.
12:58 And then valuing that progress, right?
13:00 Rewarding really great work
13:02 that drives that work forward,
13:04 measuring ourselves across that.
13:06 We've established an AI principles review process
13:09 that actually reviews
13:11 all of these AI launches coming through
13:14 to actually make sure they're meeting those criteria.
13:17 And we're also building tools and resources
13:19 to actually make it easier for teams to do this work, right?
13:22 I think starting and really thinking about
13:24 how do you measure bias?
13:25 How do we provide the tools and the resources
13:27 to help answer those questions?
13:28 Because responsible AI isn't one size fits all.
13:31 And we don't wanna make the mistake of saying,
13:34 "Hey, there's a button you can press," right?
13:36 We wanna actually build that in
13:38 to the design process every step of the way.
13:40 And that comes with building the tools and resources
13:43 at every stage of that process
13:45 and raising the right awareness with our teams
13:47 for how they can actually do this work,
13:49 such that when they do go through a review,
13:52 there's no surprises because you've actually
13:54 been building it in from the beginning.
13:55 - So for some of our audiences here
13:58 who are either CEOs or leading AI efforts
14:01 within their organizations,
14:02 do you advise that there be an ethics team
14:06 or how do you integrate it across the entire organization?
14:09 - Yeah, it's a good question.
14:10 And it's something we always struggle with that polarity,
14:13 right? - Right.
14:14 - Because on one hand,
14:15 building responsible systems has to be everybody's job,
14:18 and it should be everybody's job.
14:19 On the other hand, you also need to have individuals
14:22 who really have the space and the time
14:25 to dive into the hard questions
14:27 and build out the right expertise and the right best practice.
14:30 And so that's kind of why we've built a combination of both.
14:32 So we have central teams like mine
14:35 that really focus on what are the gnarly hard questions
14:39 that are coming out in responsible AI?
14:41 What are the new challenges we need to understand?
14:43 How do we answer questions like how do you measure bias?
14:46 And then we work across teams
14:49 where we have individuals embedded across the organization
14:52 who are thinking about what does inclusion mean
14:54 in the context of YouTube
14:55 or what does it mean in the context of search?
14:58 And we partner directly with those teams
15:00 to kind of share insights back and forth
15:02 to make sure we're learning what problems are they seeing
15:04 in the product that we should be thinking about
15:06 and driving more research.
15:07 What are insights that we have
15:09 that we can bring back into the product?
15:11 And I think honestly that is very much the setup
15:13 I would start with, right?
15:15 Eventually as we build more and more state
15:17 and more and more insight,
15:18 I often joke that I'll work myself out of a job
15:21 because we really will have built responsibility
15:23 into the entire organization.
15:25 But there's so many new questions that are coming up
15:27 that I think there's a certain amount of work to do
15:30 before we get there.
15:31 - Stuart, how about you?
15:32 How do you work with organizations through Aligned AI?
15:35 - Well, there is a lot of truth in what you're saying
15:39 and you do want to sort of get in the habit
15:41 of bringing in ethics.
15:43 But also, we have our organization set up
15:47 so that it requires unanimity amongst the shareholders
15:50 to pivot it away from being an AI alignment organization.
15:54 So that's never gonna happen.
15:55 But we don't want safety and ethics and alignment
16:00 to be a separate thing.
16:02 We think that if your product is unsafe, you have failed.
16:07 You have not built a successful AI.
16:10 I think Stuart Russell's expression was,
16:13 we don't have one group of engineers who build bridges
16:17 and a second one that goes around and points out
16:19 that they might fall down, so you should change this and that.
16:22 You just build bridges.
16:23 So you want to build AI that is successful, that is aligned.
16:27 Alignment is a capability.
16:30 So we want everyone to be doing it
16:34 because that's their job.
16:36 If their AI is racist, they have failed.
16:41 So, of course,
16:46 you're gonna have to have some people looking over
16:48 the ethics and other parts of the team, as you described it.
16:51 But the fundamental thing that we want is
16:54 that people understand alignment is success.
16:58 This is a capability.
17:00 - Yeah, I think actually building on that,
17:02 we've been talking a lot about how if you really want
17:05 to build bold, successful AI,
17:07 you have to build responsible AI.
17:09 And I think the more we talk about responsibility,
17:12 the same way we talk about quality,
17:13 the same way we talk about accuracy, right?
17:15 Building a responsible product
17:17 is a better user experience.
17:19 And so you actually are building a better experience
17:21 for your users and so actually baking it
17:23 into the core fundamentals of how you build the product,
17:26 I think is actually a really critical point.
17:28 And I think it's more of a question of
17:30 how do you instill that?
17:32 How do you give teams the resources to do that work
17:36 such that it does start becoming a part of that
17:38 in the same way?
17:39 - And just to build on that,
17:40 that's what the Markkula Center tries to do
17:42 with many of our materials on our website
17:43 are just free materials we're putting out there
17:45 to give people the tools that they need
17:47 in order to make the good ethical choices
17:49 that they already want to make.
17:50 So we published a book back in June,
17:52 which is called "Ethics in the Age of Disruptive Technologies:
17:55 An Operational Roadmap."
17:57 And it goes through five steps.
17:58 It's set up like a road trip.
18:01 First, you decide you want to go.
18:03 Second, you figure out where you are right now.
18:05 Third, you figure out what your destination is.
18:07 Fourth, you figure out what you need
18:08 in order to get from here to there.
18:10 And fifth, you make sure you're tracking your progress.
18:12 And all of these things
18:13 are from an ethical perspective, right?
18:15 To make sure that you're thinking about
18:17 how to really implement these tools,
18:19 these ways of thinking into the organizational structure
18:22 to make sure that you have everything that you need.
18:24 - This is great.
18:25 Well, thank you so much.
18:26 My takeaway from this panel is that
18:28 we're at a time where implementing responsible AI
18:31 and safe AI is actionable, it's very doable,
18:34 and there's a way to implement that
18:36 across your entire organization.
18:37 Thank you.
18:38 - Thank you.