Murati sat down with editor-at-large Michal Lev-Ram at a Fortune MPW dinner in San Francisco to discuss the Apple partnership, new CFO Sarah Friar, customer trust and safety, how she found her love for AI, and more.
Transcript
00:00 Mira, thank you for being here with us.
00:02 I know you must be a little bit busy.
00:06 So I have like a gazillion questions for Mira,
00:09 but we have 20 minutes.
00:10 So I just want to set everybody's expectations.
00:12 We're going to stick to a few topics,
00:14 including some recent news and some areas
00:18 that Mira oversees as CTO.
00:21 And hopefully we can dive into some of those.
00:24 I think my first question for you
00:26 actually is just, given how insanely busy you are
00:29 and given the barrage of news, some good, some bad--
00:35 You joined around six years ago, I think.
00:39 At that time, it was a very different organization.
00:41 You were, relatively speaking, kind of off the radar.
00:47 Do you ever miss those days of being able to be heads
00:51 down and doing the work?
00:54 I mean, we're still very much heads down doing the work.
00:57 It's just that the work has evolved.
01:01 And it's not just about the research.
01:05 It's also now that research has progressed so far,
01:10 it's also about figuring out how do we
01:12 bring this into the world in a way that's helpful and safe.
01:18 So the mission remains the same, but we've
01:20 made a lot of progress on research,
01:23 and the work has expanded.
01:26 And yes, there is a lot of public attention
01:28 on it as well, which can feel unusual when you're
01:33 working on technology and building products.
01:36 But it's also very much necessary,
01:39 given the importance of what we're doing.
01:41 And scrutiny is good.
01:44 OK.
01:45 Let's go back even further.
01:47 For those who don't know your trajectory,
01:51 give us kind of the highlights.
01:53 I know you grew up in Albania at a time
01:55 when even decent access to the internet wasn't a given.
01:59 How did your interest in AI start,
02:02 and what do we need to know about what
02:04 led you to your current role?
02:07 I've always been interested in math and physics.
02:11 And so my background is very much math heavy.
02:16 And I went on to study engineering,
02:19 worked in aerospace, and then in automotive at Tesla.
02:24 And that's where I started to get more and more interested
02:27 in applications of AI in the real world
02:30 through self-driving cars.
02:32 And then I went on to apply AI and computer vision
02:36 in applications of virtual reality and augmented reality,
02:42 sort of trying to understand how do you create a new interface
02:46 where you interact with information in a more
02:49 natural, more intuitive way.
02:52 And that's sort of where I was pushing
02:55 the applications of AI in the real world in different domains.
03:00 And then thought about how do I learn the most about AI.
03:06 And at the time, I was considering applications
03:13 of AI in different domains and thought ultimately
03:17 what I wanted to do the most was learn
03:19 about general artificial intelligence
03:22 and really dive deep into research.
03:26 And then from there, understanding
03:28 what we could do with it in the real world.
03:31 And I was really attracted to OpenAI's mission, which
03:36 is-- at the time, OpenAI was a nonprofit,
03:39 and now we're a capped-profit company.
03:40 But the mission is still the same.
03:43 It's the same, in both intent and extent.
03:47 - OK, I want to get into some of the latest news.
03:49 And again, there's a lot.
03:52 But I know Mariam mentioned Sarah Friar,
03:56 whom you just hired. Yesterday,
03:58 the company announced that you have a new chief product
04:01 officer and CFO.
04:03 And I think probably a lot of the women in this room
04:06 are familiar with Sarah.
04:09 She is a seasoned public company executive,
04:12 both former CEO and CFO.
04:14 And I feel like it's kind of like when you get married
04:17 and the next day someone asks you,
04:18 when are you going to have kids?
04:20 This is like you hire a seasoned public company executive.
04:24 And the next day people ask you, what should we
04:26 be reading into this-- is an IPO in the mix?
04:31 How should we be deciphering these hires?
04:33 What does it mean for the company?
04:35 - What it really means is that we're
04:37 in this next phase of the company
04:44 where we're serving hundreds of millions of users,
04:44 and developers, and companies out there.
04:47 And we need to have the most competent and best executives
04:52 helping us grow in this next phase of the company.
04:56 And we're very excited about Sarah and Kevin
04:58 and the skills and the leadership
05:01 they're bringing to the company.
05:03 - OK.
05:04 I had a feeling you wouldn't say anything about the IPO.
05:07 [LAUGHTER]
05:10 So I want to talk about the news with Apple.
05:15 I'm guessing everybody saw this, or you
05:17 don't live in this area.
05:19 So help us unpack a little bit from a technology perspective
05:29 just how differentiated and significant this is.
05:34 I know Apple-- and this is Apple, not OpenAI--
05:36 but Apple referred to it as the most advanced security
05:39 architecture ever deployed for cloud AI compute at scale.
05:45 First of all, do you agree with that for this integration
05:48 with Apple Intelligence?
05:49 And again, just help us understand
05:51 what's going on behind the scenes that theoretically makes
05:55 it so private and secure.
05:58 - Well, so the partnership with Apple is--
06:02 we're super excited about it because it's
06:04 such a great opportunity for us to bring our technology
06:08 to as many people out there as possible in a very seamless way
06:12 where they don't have to switch devices.
06:14 They can interact with--
06:17 the technology comes to you in the best way possible
06:21 and safest way possible.
06:23 And the partnership with Apple is very aligned.
06:25 We care very deeply about privacy and safety
06:29 of our products and how we develop the technology
06:33 and how we ultimately deploy it.
06:35 So that's very much--
06:38 we're very much in alignment.
06:39 And it's going to push us forward in a better direction.
06:45 - So Apple, obviously, has been known
06:49 for taking a privacy-focused, secure approach to rollouts.
06:54 Do you think that for people who have anxiety
06:57 about interacting with Gen AI products, not just ChatGPT,
07:02 do you think that this kind of helps
07:07 push in that general direction of more trust, ultimately?
07:13 - We have to do a lot to build trust around AI in general.
07:19 There are so many aspects to it.
07:21 And I think this partnership is hopefully
07:25 something that moves the needle in the right direction
07:28 as we bring AI into so many products
07:32 and different integrations.
07:33 And specifically with our partnership,
07:37 people can log into ChatGPT or use it through their Apple account.
07:42 And we each have the product policies
07:47 that apply to either ChatGPT or Apple.
07:52 But we both strongly believe in privacy and safety
07:57 of the products.
07:58 We will not be logging information
08:01 through the Apple accounts.
08:06 But when people log in on ChatGPT--
08:12 and they have not specifically opted out from us
08:17 being able to look at the data--
08:20 then in those cases, we will be looking at the data.
08:25 And when people have specifically opted out,
08:28 then we will not be looking at the data.
08:30 More broadly, we do not train on any customer data in general
08:37 or data where people have specifically opted out.
08:43 - And by the way, on that note, the fact
08:45 that the models can't train on the user data of Apple OS users,
08:53 is that a downside here to pushing forward
08:58 the ability for these models to keep training?
09:00 I mean, that's a lot of data.
09:01 - Look, the reality is that we have to do
09:07 the right thing for the users.
09:10 And it's very important that people
09:14 are trusting the technologies and the products
09:18 in which they're deployed.
09:20 And it's important for people to have
09:22 control over their data, how it's used,
09:25 and very important for them to understand how it's used
09:30 and where it's used.
09:32 So that comes first.
09:33 And that's a priority, regardless of the impact
09:36 that it has on how much it drives the technology forward
09:42 or not.
09:43 - So you're going to have to forgive me because I'm
09:45 going to quote Elon Musk.
09:46 I'm sorry.
09:48 But he is one person who is not in agreement
09:52 with this being a model for privacy and security.
09:54 He called the integration creepy spyware.
09:58 What's your response to that?
10:00 Don't care?
10:06 - That's his opinion.
10:09 I mean, I'm not--
10:12 obviously, I don't think so.
10:13 We don't think so.
10:15 And we care deeply about the privacy of our users
10:19 and about the safety of our products
10:22 and how we're developing and deploying this technology.
10:27 Our track record in deploying the technology
10:29 safely is very strong.
10:32 We've done that with GPT-3 and GPT-4.
10:35 We're doing the same thing with GPT-4o right now.
10:37 And we're trying to be as transparent as possible
10:40 with the public and communicate how we're making decisions.
10:44 We have a preparedness framework that
10:46 shares how we think about frontier models
10:49 and our approach to safety there.
10:53 And we will continue to be transparent
10:55 through the preparedness framework,
10:57 through things like the Model Spec, where
11:00 I don't think anyone else has such a developed and transparent
11:05 framework for what happens with model behavior.
11:09 It goes from the high-level rules all the way
11:12 to very technical decisions, however complex they are.
11:17 We're sharing that with the public.
11:20 And we're providing an opportunity
11:21 for people to provide input.
11:24 And we're also working on a lot of experimental ways
11:28 to gather input from non-users as well.
11:32 I think these are the important things.
11:34 The work is not done.
11:35 We have a lot more to do here.
11:38 And not just OpenAI.
11:40 I think the whole industry does, in explaining
11:43 what's going on with the technology,
11:46 creating more participation, more agency.
11:49 But we have to start with actually explaining.
11:53 Because the biggest risk is that stakeholders
11:57 misunderstand the technology.
12:00 So in the midst of all of these developments
12:04 and current partnerships that you guys are striking,
12:09 you are also currently training a new model, a new AI model,
12:13 which is expected to bring the company closer to AGI.
12:16 And I know you spend a great deal of time
12:19 thinking about this.
12:20 And also thinking through worst case scenarios,
12:24 best case scenarios.
12:25 I'm curious-- and I think probably everybody in this room
12:28 is very curious about what that future looks like.
12:31 Can you talk to us a little bit about how you think about it,
12:37 how you scenario plan for it, and what's
12:39 like the Mira-case scenario?
12:42 What's the ideal version of this future, in your view?
12:46 So in my framework, we're thinking about the progress
12:54 that we're making and where this technology is going.
12:58 It's definitely going to be incredibly
13:01 transformative technology.
13:04 And it requires a lot of work, not just
13:08 from the companies that are developing and deploying
13:11 these technologies, but also from other stakeholders
13:15 like civil society, media, governments, regulators,
13:21 the general public, people in every domain.
13:25 We have to create some sort of shared responsibility
13:29 around the general preparedness for bringing this technology
13:33 into the world.
13:35 But in order to have shared responsibility,
13:38 we need to understand it, and we need to make it very accessible.
13:43 And so we've been focusing on these last two parts,
13:49 making it accessible, engaging with people,
13:52 educating various stakeholders on what the technology means,
13:58 what it's capable of.
14:01 The most recent thing we did was with the release of GPT-4o,
14:06 which is our omni model.
14:09 We made it accessible to everyone for free.
14:14 And I don't think there is enough emphasis on how unique
14:24 that is for the stage where the technology is today,
14:27 in the sense that inside the labs,
14:30 we have these capable models, and they're not that far ahead
14:37 from what the public has access to for free.
14:41 And that's a completely different trajectory
14:44 for bringing technology into the world
14:47 than what we've seen historically.
14:49 And it's a great opportunity because it brings people along.
14:52 It gives them an intuitive sense for the capabilities and risks
14:58 and allows people to prepare for the advent of bringing advanced AI into the world.
15:07 And obviously, the opportunities are huge.
15:09 Now, it's normal that we talk a lot about the risks,
15:14 because they're so important.
15:17 But also, we wouldn't do this if we didn't believe
15:20 that the opportunities here are huge.
15:23 I personally think that there is just this incredible promise
15:31 with advanced AI in healthcare, in education.
15:35 We're just kind of touching on some of these capabilities and opportunities
15:41 with personalized tutors, like from Khan Academy.
15:44 But you can imagine that anyone in the world, in the most remote areas,
15:50 they would have access to custom tutoring
15:55 that is personalized to the way they think,
15:59 to their cultural background, to their interests.
16:04 And that's just amazing.
16:06 We normally don't really think about how we learn
16:10 or how we think until 10 years into our education or so.
16:15 And that's just such a missed opportunity.
16:17 And I really do believe that if you can push human knowledge forward,
16:21 you can push society and civilization forward.
16:25 And I mean, there's so many opportunities in healthcare,
16:31 thinking about solutions for climate change.
16:34 There's a long list of things that we haven't tackled.
16:37 And I think bringing these advanced AI systems in the world will help us a lot.
16:44 And obviously, there are always risks whenever you have something so powerful.
16:51 You have to deal with the downsides, with the misuse.
16:57 And if you think about the world around us,
17:00 it's all engineering, and engineering carries risks.
17:04 And we have learned to build systems and build trust
17:12 and to figure out how to make engineering safe around us,
17:17 you know, the buildings, the roads, the bridges, everything.
17:22 And so this is quite similar,
17:24 where we have to build all this infrastructure that we trust
17:27 and that makes the deployment of these technologies safe and useful.
17:33 And it's actually very congruent with business needs
17:37 because people want products that they can trust and that are safe.
17:41 - So I want to ask you one more question, just on the safety side.
17:44 This is moving so fast,
17:46 faster than bridge building and, you know, real-life infrastructure.
17:51 I know you oversee the safety teams.
17:54 And one of the things we talked about when you were on our cover
17:58 last year was your belief that the unique structure at OpenAI
18:02 provides the necessary guardrails to make sure that you have safety in mind.
18:07 Obviously, there's been a lot of scrutiny.
18:09 There was a lot of upheaval late last year.
18:11 It's hard to remember that you were interim CEO for a little while there
18:15 because so much has happened.
18:18 Tell us why and how you believe that the right guardrails,
18:27 the right structure on the safety side, are still in place.
18:34 I assume you believe that or you wouldn't be there at this point.
18:38 - Well, so I think you're referring to the board structure, right?
18:43 Yeah, okay.
18:45 So with the way that OpenAI was set up, you know,
18:51 the question has always been what is the best administrative structure,
18:56 the best incentive structure, alignment, and design
19:00 so the company can fulfill its mission in the best possible way.
19:04 And there have been a couple of iterations, you know,
19:07 as we work towards the mission and we learn more,
19:12 it is a process of discovery.
19:14 I mean, we're literally in the pursuit of discovery.
19:16 We're doing research.
19:18 And with that, you have to kind of learn and adjust and make changes.
19:22 So there was the initial change from non-profit to capped-profit
19:27 in order to actually be able to fund the supercomputers
19:31 and hire talent while, you know, putting the mission first.
19:37 And so that is the goal, always put the mission first.
19:41 But as we discovered in November, you know,
19:44 with the non-profit board structure that we had,
19:47 it doesn't put--
19:50 the non-profit board structure didn't have accountability to anyone but itself.
19:56 And we're building and deploying a technology that will affect everyone.
20:00 So ideally, there would be, you know, some sort of accountability on the board as well.
20:08 And our current board,
20:13 which is very experienced in non-profits and larger companies,
20:19 they are doing a lot of thinking on the governance structure
20:23 and what is the best way that you can balance the incentives
20:27 and put the mission first while having a lot of accountability.
20:32 OK.
20:34 Well, as I said, we will not get to, you know, a tiny fraction
20:37 even of the questions I'd love to ask you.
20:39 But I really, really appreciate you taking time
20:43 and sitting here and just being so thoughtful.
20:46 There's so much interest in what you guys are doing.
20:49 And we appreciate you coming here and talking to us.
20:51 - And... - Thank you so much.
20:52 With that, we'll let everybody eat.
20:55 Thank you so much.
20:57 Thank you, Mira.
20:58 (Applause)