Daniela Amodei, Co-founder and President, Anthropic; Lila Ibrahim, Chief Operating Officer, Google DeepMind; Pilar Manchón, Senior Director of Engineering, Research Strategy, Google AI, and Board Member, Eventbrite; Moderator: Maryam Banikarim, Co-founder, NYCNext; Founder and Managing Partner, MaryamB; Co-chair, Fortune MPW Summit

Category: Tech
Transcript
00:00 All right.
00:03 One of the biggest topics of conversation happening everywhere from boardrooms to dinner
00:07 tables of course is the power and impact of generative AI.
00:13 And there are varying levels of excitement, confusion, and even fear.
00:18 Questions such as how can AI be used?
00:21 Will it kill jobs or create jobs?
00:24 Can it be trusted to be accurate?
00:27 What about the spread of misinformation and bias?
00:30 How can it be implemented properly and securely?
00:34 And when is the right time to jump in?
00:37 They're all genuine concerns and important points of conversation.
00:41 And one thing is for sure, AI is here to stay.
00:45 So we want to have a real conversation about the power of machine generated intelligence.
00:51 We're joined by three women who are leaders in this space. Daniela Amodei is president
00:57 and co-founder of Anthropic.
00:59 Lila Ibrahim is COO of Google DeepMind.
01:03 Pilar Manchon is senior director of engineering research strategy at Google AI and a board
01:09 member of Eventbrite.
01:11 This conversation will be led by Maryam Banikarim, co-founder of NYCNext and founder and managing
01:19 partner of MaryamB, and MPW Summit co-chair.
01:23 Please welcome them to the stage.
01:31 Okay, so I think we're between you and lunch.
01:43 So with that note, I'm going to get going.
01:46 Lila, in the last two years, you can't avoid stories about AI.
01:50 In fact, I think the session before was oversubscribed.
01:54 Billions are being invested in AI globally.
01:56 It's a brave new world and it's rushing at us at breakneck speed.
02:00 You've been in tech for 30 years.
02:02 What are the one or two key things you think everyone in this room should know?
02:07 By the way, both from a business perspective, we have big questions here, big, big topics.
02:12 But also as humans, right, because I think we wear sort of two hats, mother, daughter,
02:16 son, and C-suite executive.
02:20 What a great place to start.
02:22 You can imagine this year I have had a lot of conversations about AI.
02:27 And actually every single conversation deals with risk and responsibility.
02:33 How are we going to develop this technology in a responsible way?
02:36 What are the risks?
02:37 Those are very, very important conversations I think we'll probably touch on today.
02:41 But I really wish someone would just ask me, what are the opportunities?
02:47 What could this really unlock for us?
02:49 How could AI be used as a tool to help understand diseases, come up with solutions, find solutions
02:56 to the climate crisis, create new materials?
02:59 There's so much possibility.
03:02 And so I would love us, I think everyone here, to be thinking about what are a couple of
03:06 opportunities both personally and professionally where you're excited where AI could transform
03:12 the world ahead so that we can actually leave a better place for our kids or their generation
03:19 or those who follow us.
03:21 So that would be my wish for the conversation.
03:25 Great.
03:26 Pilar, top tech executives, Musk, Zuckerberg, and Gates, discuss the future of AI and they
03:32 all agree that there's some role for government to play.
03:36 But yet, legendary VC Bill Gurley says that regulation is the friend of the incumbent.
03:43 So we understand the benefits of competition and innovation.
03:47 We've also all lived through the Internet Revolution.
03:50 So we also know that some guardrails are needed for unintended consequences.
03:54 What are the general areas that we should be focused on from both the safety and regulation
03:58 perspective and do they vary like country by country or is this now a global conversation?
04:04 Well, I think that it's clear to everybody that regulation is needed and it's important
04:10 that it's the right regulation so that we don't impede innovation.
04:13 So as Lila said, it's important to take into consideration the breadth of opportunities
04:21 that AI will bring us.
04:22 So we want to embrace that.
04:24 But the kind of regulation that we need has to focus on transparency, control, and being
04:32 auditable so that regulators and people and all other organizations, not just AI people,
04:40 can understand what AI is doing, why is it doing it, how is it doing it, where is the
04:44 data coming from, and so on.
04:46 So more transparency around that is probably what everybody is calling for.
04:52 And also an understanding of what the governance processes are, not only at the large corporations
04:57 like Google and some of the other bigger companies. We have very strict governance processes internal
05:04 to the company, and we share those processes, but we also share the tools for other companies
05:09 to be able to have their own governance processes.
05:12 And that is extremely important because no matter how many guardrails and how many regulations
05:16 you put in place, if you don't take into account what you're doing with your own data, with
05:21 your own processes, with your own applications, then things could potentially go wrong.
05:26 Okay, I have a lot of questions on that.
05:28 Okay, Daniela, a couple of interesting things about your company, Anthropic, as I did my
05:33 homework.
05:34 Your chatbot, Claude, has been trained using something called constitutional AI.
05:39 What is that and why should we care?
05:42 And you're set up as a public benefit corporation, which is different than a B Corp, and I know
05:46 that that was intentional, so same kind of question, like what is that and why should
05:50 any of us pay attention to that?
05:52 Sure.
05:54 So I'll start by talking about your first question, what is constitutional AI?
05:58 So really the goal of training a language model like Claude is to be helpful to people,
06:06 to ensure also that it is harmless and that it is honest.
06:10 So we have this kind of triple H framework that we apply at Anthropic to the models that
06:14 we train.
06:15 And just to give sort of a very brief history lesson, the way that these models were sort
06:20 of trained, the gold standard for them in the past, was by using a technique called
06:24 RLHF, or reinforcement learning from human feedback.
06:28 And what that means is essentially the model would give an output and a human, or a lot
06:33 of humans actually, would give it a thumbs up or a thumbs down.
06:36 So if it said something dishonest, it would get a thumbs down.
06:39 If it said something honest, it would get a thumbs up.
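As a rough sketch of that thumbs-up/thumbs-down idea, the Python below shows how human preference labels might feed a toy reward function. All names and data here are hypothetical illustrations, not Anthropic's actual pipeline; real RLHF trains a learned reward model on large preference datasets and then fine-tunes the language model against that reward signal.

```python
# Minimal, hypothetical sketch of the "thumbs up / thumbs down" idea behind RLHF.
# Real pipelines train a reward model on large preference datasets and then
# fine-tune the language model against that reward; the names here are invented.

# Step 1: humans label model outputs as good (+1) or bad (-1).
preference_data = [
    {"prompt": "Is the Earth flat?", "output": "Yes, it is.",    "label": -1},  # dishonest -> thumbs down
    {"prompt": "Is the Earth flat?", "output": "No, it is not.", "label": +1},  # honest -> thumbs up
]

def reward(prompt: str, output: str) -> float:
    """Toy stand-in for a learned reward model scoring helpful/honest/harmless outputs."""
    for example in preference_data:
        if example["prompt"] == prompt and example["output"] == output:
            return float(example["label"])
    return 0.0  # unseen outputs get a neutral score in this toy version

# Step 2: the language model would then be fine-tuned (e.g. with a policy-gradient
# method) to prefer outputs that the reward model scores highly.
print(reward("Is the Earth flat?", "No, it is not."))  # 1.0
```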
06:42 And this is a technique that we really pioneered at our previous company,
06:48 and we were wondering: are there ways to help advance this kind of safety?
06:51 What was your previous company?
06:53 OpenAI.
06:54 Okay.
06:55 Yes.
06:56 And before, when we decided to go co-found Anthropic, we were still using RLHF, we still
07:02 use it, but we had this insight that potentially you could train these models using a different
07:09 type of technique.
07:10 And so our team of researchers created this idea of constitutional AI.
07:15 And all that means really is that you give the model a constitution for how to behave.
07:20 So instead of scoring specific examples as good or bad, you say, okay, what are the values
07:26 that we really want this model to have?
07:28 And we incorporated more than two dozen different types of documents in that, things ranging
07:33 from the UN Declaration of Human Rights to trust and safety terms of services from major
07:39 companies and businesses.
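To make "giving the model a constitution" more concrete, here is a hedged sketch of a critique-and-revise loop in the spirit of what the panel describes. The `model` placeholder and the example principles are illustrative assumptions, not Anthropic's actual documents or implementation.

```python
# Hypothetical sketch of a constitutional-AI-style critique-and-revise loop.
# The principles below stand in for the kinds of documents the panel mentions
# (the UN Declaration of Human Rights, trust-and-safety terms of service).

CONSTITUTION = [
    "Choose the response that most respects human rights and dignity.",
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that is most honest about uncertainty.",
]

def model(prompt: str) -> str:
    """Placeholder for a call to a language model; returns a draft response."""
    return "Draft response to: " + prompt

def revise_with_constitution(prompt: str) -> str:
    """Ask the model to critique and revise its own draft against each principle."""
    response = model(prompt)
    for principle in CONSTITUTION:
        critique = model(f"Critique this response against the principle '{principle}':\n{response}")
        response = model(f"Rewrite the response to address this critique:\n{critique}\n\nOriginal:\n{response}")
    # The revised responses are then used as training data, so the values live in
    # the model itself rather than in per-example human thumbs-up/thumbs-down labels.
    return response

print(revise_with_constitution("How should I respond to a request to cause harm?"))
```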
07:41 Who writes the constitution, right?
07:42 Because we know in our constitution, there was a collective that sort of had a voice
07:46 in that, and there's lots of opinions about whether that was good or not, or inclusive
07:49 or not.
07:50 Who's writing that constitution?
07:52 So it's a great question.
07:54 Part of why we chose to use such a large corpus of different documents to go into the constitution
07:59 was for that exact reason.
08:01 We said, rather than create a constitution on our own, let's look at documents that are
08:06 related to things like human rights and ethics, and also have been used by companies for decades
08:12 sometimes to ensure that users are treated fairly and that the models are developed ethically.
08:17 So rather than trying to arbitrate ourselves, we really picked a wide range of different
08:22 documents to go into that.
08:24 And this is a distinct difference from your former company.
08:27 Constitutional AI is something that we pioneered, but we've actually published our safety research
08:31 on it.
08:32 And so if you've seen our website, anthropic.com, we published more than 20 safety research
08:39 papers, including the techniques that got us to constitutional AI.
08:42 I didn't answer your PBC question, but we can come back to it.
08:45 We can come back to that.
08:46 Sounds good.
08:47 I wanted to open it up because I know that many of you have questions.
08:50 And while we figure out where the question is, because it's pitch black, and where the
08:55 mic is, I will ask one other question, which is, as of April, 35 years into the internet,
09:02 there were five-- I can't read-- 5.2 billion internet users worldwide.
09:07 So 65% of the population is connected.
09:10 That means 35% of the population is still not connected.
09:14 What are the key ways to think about the inequality issues and the digital divide?
09:19 Because as this moves at rapid speed, we've already left some people behind.
09:23 What are we going to do to not aggravate that situation further?
09:28 And Pilar, why don't we start with you?
09:30 Sure.
09:31 I think that one of the great advantages of AI is that it democratizes access, access
09:36 to resources, access to information, access to training.
09:40 So in all likelihood, a lot of the people that don't have the right kind of resources
09:45 don't have access to it.
09:47 And by building the tools that we are building and providing general access, most of the
09:52 time it's free.
09:54 But don't I still need a device and connectivity?
09:56 You do need a device and connectivity, unless that device is provided for, you know, not
10:01 per individual, but for, say, a classroom or a group or something like that.
10:07 And that is actually being done in some of the -- for instance, Google.org and some of
10:11 the other foundations that we're aware of.
10:14 But it is important to understand that the device is not enough.
10:16 You can have a device and still not understand how to use it.
10:19 It's not going to educate you.
10:21 It's not going to help you in that sense.
10:23 The device is a means.
10:24 And artificial intelligence can actually coach you, teach you, guide you, make you better,
10:31 give you motivation.
10:32 There are so many things that can be done with artificial intelligence that do not require
10:37 another person to be there doing it for you, that it opens the door to a fantastic growth
10:42 and expansion of, you know, everything everywhere.
10:47 How about you, Pilar?
10:48 I'm sorry.
10:49 Lila.
10:50 Lila.
10:51 You know, equity in AI is absolutely a critical topic.
10:56 As mentioned by Pilar, I think, you know, we have to get people connected to the Internet
11:03 and learn how to use technology in the right way.
11:05 I personally do a lot of that.
11:06 It's a personal passion of mine, actually, a nonprofit I have called Team4Tech, where
11:11 we've worked on building out the infrastructure.
11:15 That's step one.
11:16 Step two is, how do we bring those voices into where the technology is going?
11:22 We've been working with the Raspberry Pi Foundation to do Experience AI.
11:27 We've specifically targeted it at 11- to 14-year-olds, knowing that having a chance to learn how
11:32 to use technology in a responsible way from a young age is going to be critical for that
11:37 literacy long term.
11:39 But there's still a lot of lifelong learning that needs to happen.
11:42 If I could leave you with one thing about this, though, it's this has to be about collaboration.
11:48 No one company, no one organization can do this on their own.
11:53 We've worked with the Aspen Institute as an example, specifically around equitable AI,
11:58 knowing that this is going to be an issue, to say how do we bring together different
12:02 points of view, different perspectives, have very constructive conversations, and publish
12:08 out what are the learnings about building equitable AI we need to have.
12:12 Literacy is a key aspect of that, having a common vocabulary, and doing this in collaboration.
12:19 We have to make sure that we learn from past technologies and we start bringing outside
12:24 voices into the development and that we think about this in international terms.
12:30 >> I'm not going to ask a question about AI specifically, but listening to you, it struck
12:39 me that you really are at the very genesis of inventing something that's going to have
12:47 who knows how long of an impact.
12:53 So how do you deal with yourself about that?
12:55 How do you deal with yourself?
12:57 >> You mean ethically?
12:58 >> All of it.
12:59 Like how do you talk to yourself about that?
13:02 Like self, I'm inventing the future.
13:04 >> They talk to digital assistants.
13:07 >> Yeah, exactly.
13:10 So how do you deal with yourself?
13:11 I think you're understanding my question.
13:13 How do you deal with yourself and navigate that?
13:18 Actually when you were talking is when the question struck me.
13:21 >> Yeah, that's a wonderful question.
13:23 I love that.
13:24 You know, I think, I mean, first of all, I don't pretend to speak for everyone working
13:28 in this industry.
13:29 I think, you know, for me, it's a little bit of an odd kind of balancing act.
13:37 There's almost a duality that we are facing in this industry.
13:41 And I think some of it is, you know, like Lila talked about, there's incredible potential
13:46 benefits from this technology.
13:48 And I think sometimes they don't get talked about as much.
13:51 But some of the work that is being done in healthcare or science or climate technology,
13:57 right, is incredibly inspiring.
14:00 And even just the sort of productivity gains that can be made through a tool like Claude
14:04 I think are incredibly impressive.
14:07 There's also really big risks that come with this technology, right?
14:10 And to sort of circle back to what you said, you know, part of why we incorporated as a
14:14 public benefit corporation was to say we think it's very important for corporations that
14:19 are working in this area to balance this potential positive benefit with this, you know, this
14:27 sort of risk, right?
14:29 This opportunity but also potential for the technology to be used for harm.
14:35 And so I think from a personal perspective, it's, you know, there's no perfect answer.
14:41 But I think just sort of remembering that both things can be true, right?
14:45 Like there's this potential positive thing that can be gained from doing it.
14:49 We also have to be careful.
14:50 And really this feeling that I think has also been expressed so much on the stage that it
14:54 shouldn't be left up to any one of these companies alone, right?
14:59 This is a multi-company, also international government inclusive, civil society inclusive
15:06 process.
15:07 Maybe that just sort of makes it feel a little bit less like it's personally on you.
15:10 Yeah, it's great.
15:11 It's a great question.
15:12 Hello, Annette Clayton, Schneider Electric.
15:15 I really wanted to talk about speed and velocity because I get the sense and we talked about
15:21 it actually in the breakfast yesterday about how quickly this is moving.
15:25 Could you characterize that for us because a lot of the things we're talking about require
15:29 policy and thinking and understanding and education, but it's also moving very, very
15:34 quickly.
15:35 So your comments would be appreciated.
15:37 Who wants to take that?
15:39 I don't mind taking that.
15:41 I think it's very clear that we're moving very fast because of the exposure to the general
15:45 public.
15:46 AI has been developing very fast for a very long time.
15:49 But those of us who have been immersed in that world, we're already accustomed to that.
15:53 The moment that that became a big part of our everyday and exposed to the rest of the
15:59 world in such way, then there is a sense that everybody has to adopt it almost immediately,
16:04 that you're going to be left behind if you don't.
16:07 And there is some truth to that.
16:09 This is an unstoppable train, so you better jump onto it.
16:12 But you have to do it, in my opinion, in a very responsible way, taking into consideration
16:17 where you are, what kind of business domain and influence the technology and AI will have
16:24 on that, and making sure that you educate not only the top executives on the potential
16:31 of AI, but it trickles down to the rest of the organization.
16:35 So it is moving fast.
16:37 We have to move fast with that, but we also need to do it responsibly and make sure that
16:41 we have the right guardrails in place for that.
16:45 Can I give a very real example, too, though, of what, if done right, we can potentially
16:54 do to benefit the world?
16:56 So we have developed something called AlphaFold, which is a protein structure prediction system.
17:04 It predicts the way that proteins will fold or perhaps misfold.
17:10 And typically, it would take a PhD student about four to five years in a lab to maybe
17:15 do one protein.
17:18 This is a decades-old, like 50-plus-year-old problem in biology, and we released something called
17:24 the AlphaFold database, a library of predicted structures for all the proteins known to humankind, available for free
17:30 for scientists and biologists.
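To make that free access concrete, the sketch below pulls one predicted structure record from the public AlphaFold Protein Structure Database. The `/api/prediction/{uniprot_id}` endpoint, the JSON field names, and the example UniProt ID are assumptions about the public service and may differ or change.

```python
# Hedged sketch: fetching a predicted-structure record from the AlphaFold
# Protein Structure Database. The endpoint and JSON field names used here
# are assumptions based on the public database and may change.
import json
import urllib.request

UNIPROT_ID = "P69905"  # human hemoglobin subunit alpha, as an illustrative example
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ID}"

with urllib.request.urlopen(url) as response:
    records = json.load(response)

# Each record is expected to describe one predicted model, including a link to
# the downloadable structure file.
for record in records:
    print(record.get("entryId"), record.get("pdbUrl"))
```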
17:33 So if you think about that we're now doing science at digital speed, how is that being
17:37 used?
17:38 People are, researchers are using this now to understand neglected diseases, which maybe
17:43 didn't have a good business model around them.
17:46 Malaria vaccinations, plastic-eating enzymes, solving industrial waste.
17:52 So we're at the very early stages, but this is now, we've like leapfrogged an advancement.
17:58 In fact, we just also announced something around being able to determine if a mutation
18:05 is pathogenic or not.
18:08 And previously, less than 1%, only 0.1%, was known, and we're now up to 89%.
18:13 So this is science at digital speed.
18:15 So there is some benefit to this as well.
18:18 This is a big topic.
18:19 We only had 17 minutes to be specific.
18:23 Thank you so much for coming, and consider not me, but them, your safe space for any
18:27 questions related to AI.
18:30 That's what we agreed to, right?
18:31 Thank you.
18:31 [APPLAUSE]
18:33 [AUDIO OUT]
