If 2023 was the year artificial intelligence became a household topic of conversation, it's in many ways because of Sam Altman, CEO of the AI research organization OpenAI. Altman, who was named TIME's 2023 "CEO of the Year," spoke candidly about his November ousting—and reinstatement—at OpenAI, how AI threatens to contribute to disinformation, and the rapidly advancing technology's future potential in a wide-ranging conversation with TIME Editor-in-Chief Sam Jacobs as part of TIME's "A Year in TIME" event on Tuesday.

Transcript
00:00 Sam, thank you again for being here backstage.
00:02 Sam said this was the most fun thing
00:03 he'd be doing all week, so I hope you don't change your mind.
00:05 - I said this is the only fun moment
00:06 I've had in the whole New York trip.
00:08 This is great.
00:08 - We'll have a lot of questions for you.
00:11 Appreciate your time.
00:13 The first one, I think, on the minds of many people
00:16 in the room tonight, what the hell happened?
00:19 (audience laughing)
00:22 - It's a good opener.
00:27 - Thank you.
00:28 A lot of things.
00:30 Honestly, it's been a crazy whole year.
00:33 In the context of everything that has happened to us
00:36 this last three weeks or month or whatever it's been,
00:39 it stands out, but not as much as you would think it should.
00:42 We kind of went from this unknown research lab
00:45 to this reasonably well-known tech company in a year,
00:49 and I think that takes most companies 10 years.
00:52 That's been a wild experience to live through.
00:54 Of course, these last few weeks have been particularly crazy
00:58 and sort of painful and exhausting
01:00 and happy to be back to work.
01:02 To say something empathetic,
01:07 I think everybody involved in this,
01:09 as we get closer and closer to super intelligence,
01:13 everybody involved gets more stressed and more anxious,
01:16 and we realize the stakes are higher and higher.
01:19 And I think that all exploded.
01:23 - How do you think this moment has changed OpenAI?
01:27 - It's been extremely painful for me personally,
01:30 but I actually think it's been great for OpenAI.
01:32 We've never been more unified.
01:37 We have never been more sort of determined and focused.
01:40 And we always said that some moment like this would come
01:43 between where we were and building AGI.
01:47 I didn't think it was gonna come so soon,
01:49 but I think we are stronger for having gone through it.
01:51 Again, I wouldn't wish it on an enemy,
01:54 but it did have an extremely positive effect on the company.
01:59 - And what did you learn from it?
02:03 - I haven't fully recompiled reality yet.
02:10 I didn't, I haven't had the time
02:12 to emotionally process all of this,
02:14 because it was like, it all happened so fast,
02:17 and then I had to come back in and pick up the pieces,
02:20 that I haven't had time to sit down and think,
02:24 or have time to sit down and really reflect
02:26 as much as I would like.
02:27 But I would say the most important thing that I learned,
02:32 you know, a thing I had always heard,
02:35 it's like a cliche or whatever,
02:37 is that your job as a CEO is really about
02:42 the people you hire
02:44 and how much you sort of develop and mentor your team.
02:48 And the proudest moment for me in all of this craziness
02:52 was realizing that the executive team
02:54 could totally run the company without me.
02:57 I can go retire, OpenAI will be fine.
02:59 And I'm super proud of the people who did that,
03:01 and to watch them work at a time
03:03 where I couldn't really talk to them,
03:06 but they did an amazing job, really made me very proud.
03:10 And it also made me very optimistic,
03:11 because I think as we do get closer
03:14 to artificial general intelligence,
03:16 as the stakes increase here,
03:20 the ability for the OpenAI team to operate
03:23 in uncertainty and stressful times is like,
03:27 really, that should be of interest to the world.
03:30 - You've described how high the stakes are here.
03:33 What do you say to someone who says,
03:36 this company brought itself to the brink of self-destruction.
03:39 How can we trust its leader,
03:41 and how can we trust its company
03:42 with this transformative technology?
03:44 - We have to make changes.
03:48 We always said that we didn't want AGI
03:51 to be controlled by a small set of people.
03:54 We wanted it to be democratized,
03:55 and we clearly got that wrong.
03:57 So I think if we don't improve our governance structure,
04:00 if we don't improve the way we interact with the world,
04:03 people shouldn't trust us, but we're very motivated to improve that.
04:06 - On those changes, your former co-founder, Elon Musk,
04:11 former Person of the Year,
04:14 has described OpenAI as a closed-sourced,
04:17 maximum-profit company,
04:19 effectively controlled by Microsoft.
04:23 Is Elon wrong?
04:24 - On all of those topics.
04:27 - And any others, or do you wanna--
04:31 - I actually, in spite of his constant attacks on OpenAI,
04:36 I'm very grateful that Elon exists in the world.
04:40 - Why?
04:41 - Because I think he's done some amazing things.
04:43 I think the transition to electric vehicles
04:46 is super important.
04:47 I think getting to space is super important.
04:52 And I'm grateful for those things.
04:57 We're definitely not maximum-profit seeking,
05:00 although you could talk to Elon
05:01 about some of his ventures for that one.
05:04 And we open-source a lot of stuff.
05:07 We'll open-source more in the future,
05:09 and we're certainly not controlled by Microsoft.
05:13 And I think all that is something that someone can say,
05:18 but does not actually reflect the truth.
05:20 - A thought on a question about another competitor.
05:23 Google released Gemini last week,
05:26 a model that Google claims outperforms GPT-4
05:29 on many performance tests.
05:31 What do you make of Gemini,
05:32 and why did it take them so long to release it?
05:34 - I'm happy for more people to be making AI progress.
05:38 I think AI will be the single most transformative technology
05:43 of this era.
05:44 And so more people doing that, I think, is great.
05:46 When the big Gemini model, I forget what it's called,
05:49 I think Gemini Ultra,
05:50 when that gets released sometime next year,
05:52 we'll get to look at it.
05:53 I can weigh in on it then.
05:55 Certainly there's been a lot of confusion
05:57 around the metrics, but I'm sure Google will do great work.
06:00 - In an interview earlier this year with Edward Felsenthal,
06:04 my predecessor as TIME Editor-in-Chief,
06:07 you said, "I am a Midwestern Jew.
06:09 "I think that fully explains my mental model."
06:11 (audience laughing)
06:12 - I think that's like--
06:14 - Is that true?
06:14 You still feel that way?
06:15 - No, I think it's like a compressed one sentence
06:18 to explain everything about what I,
06:20 I think that's pretty good.
06:22 - As a New England Jew, I have to ask you,
06:26 how does Judaism shape your worldview,
06:30 and what has it been like to have been a Jewish leader
06:32 since October 7th?
06:34 - You know, if you had asked me this question
06:36 at the beginning of the year,
06:37 I would have said, there's all of these subtle
06:40 but important cultural things that have, I think,
06:42 shaped my worldview and how I act,
06:45 and how I sort of live my life.
06:47 And I wouldn't have talked about anything other than that.
06:53 And one of the weird things about being Jewish
06:56 and getting internet famous is like,
06:58 most of your online experience is people saying
07:01 like horrible things about Jews.
07:03 And I don't know if that was always the case,
07:05 or if that's like ramped up,
07:07 but that's certainly been my experience this year.
07:09 And it's been on double time these last couple of months.
07:14 I think I was just like wrong to be so dismissive of this.
07:20 I was like, look, antisemitism, we're done with that.
07:22 The world has moved on.
07:24 There's other problems, let's talk about those.
07:26 And I have really seen in this last year,
07:29 and particularly this last couple of months,
07:30 that I was just completely wrong about that.
07:32 And it's like a sad, sad thing for the world.
07:35 - You're someone who likes to take on intractable problems.
07:39 As you've thought about that,
07:42 how do you think about solutions?
07:44 - That one seems harder than AGI.
07:48 - Speaking of difficult problems,
07:53 next year is a historic year for democracy.
07:55 There will be elections in 40 countries.
07:58 Are you concerned at all about AI's ability
08:01 to contribute to disinformation?
08:03 And do you think there are specific concerns
08:05 that we're not taking seriously enough?
08:10 - Yeah, so I think AGI will be the most powerful technology
08:15 humanity has yet invented.
08:17 And like any other previous powerful technology,
08:21 that will lead to incredible new things.
08:23 I think we'll see education change deeply
08:27 and improved forever.
08:29 I think the kids that start kindergarten today,
08:31 by the time they graduate 12th grade,
08:34 will be smarter and better prepared
08:37 than the best kids of today.
08:39 I think that's great.
08:40 I think we can talk about the same thing
08:41 in a lot of other things.
08:42 Healthcare, people who program for a living,
08:44 a lot of other knowledge work.
08:45 But there are gonna be real downsides.
08:48 And one of those, I mean, there'll be many
08:51 that we'll have to mitigate,
08:52 but one of those is gonna be around
08:54 the persuasive ability of these models
08:59 and the ability for them to affect elections next year.
09:03 And I think we're gonna really confront
09:04 something quite challenging.
09:06 - So what's that gonna look like?
09:08 - You could, so right now,
09:11 it's like troll farms in whatever foreign country
09:17 who are trying to interfere with our elections.
09:18 They make one great meme, and that spreads out,
09:21 and all of us see the same thing
09:22 on Twitter or Facebook or whatever.
09:24 That'll continue to happen, and that'll get better.
09:27 But a thing that I'm more concerned about
09:29 is what happens if an AI reads
09:31 everything you've ever written online,
09:34 every article, every tweet, everything,
09:36 and then right at the exact moment
09:38 sends you one message customized for you
09:41 that really changes the way you think about the world.
09:44 That's like a new kind of interference
09:47 that just wasn't possible before AI.
09:50 - I find in most conversations with you,
09:52 people are processing their fears, so if you'll allow me.
09:55 Is AI good or bad for media?
09:57 - One thing I would say is no one knows what happens next.
10:02 I think the way technology goes,
10:06 predictions are often wrong.
10:08 The future is subtle and nuanced
10:11 and dependent on many branching probabilities.
10:13 So the honest answer is I don't know.
10:17 But I think it's gonna be more good than bad.
10:20 It will be bad in all sorts of ways.
10:22 But I think it nets out to something good.
10:24 As people have more free time, more attention,
10:28 and also care more about the people they trust
10:31 to help them make sense of the world,
10:33 to help them decide what to trust
10:37 and how to think about a complicated issue,
10:40 I think they're gonna rely and care more
10:42 about their relationship with someone
10:46 in the media more and more
10:47 and care more about high-quality information
10:49 in a world of massive amounts of generated content.
10:52 So I think it should be net good,
10:54 but it will be different.
10:55 - How do you think about your company's role
10:57 and your role in helping to preserve an ecosystem
11:00 where high-quality information remains?
11:02 - It's obviously super important to us,
11:06 but that's like a sort of empty statement.
11:08 The kinds of things that we try to do
11:10 are build tools that are helpful to people.
11:12 To people in media, people in other industries,
11:16 if you had asked us five years ago what was gonna happen,
11:20 we would have said we will be able to build
11:23 trusted, responsible AI, but fundamentally,
11:27 it's gonna be going off and doing its thing.
11:30 And now, I think we see a path to what we do
11:35 is instead build tools for people.
11:37 And we put these tools out into the world,
11:39 and people, media or otherwise,
11:41 use them to architect the future.
11:43 And that is the most optimistic thing
11:47 I think we have discovered in our history.
11:50 And the safety story changes in that world,
11:55 the way that we are a responsible actor in society
11:57 changes in that world.
11:59 I think we now see a path where we just empower
12:01 everyone on Earth to do what they do more and better.
12:05 And that's so exciting.
12:07 That's so different than how I thought AI was gonna go,
12:09 but I'm so happy about it.
12:11 - Speaking about where AI is going to go,
12:14 one of the challenges I think we had
12:15 in talking about the work that you've done
12:17 and that OpenAI is doing is helping people understand
12:21 your vision of what artificial general intelligence
12:24 means for our future.
12:26 And so can you help this room understand
12:28 how their lives will be changed?
12:29 You said you can't predict the future,
12:31 but as we move forward, what will AGI mean for all of us?
12:34 - I think the two, I mean, there's many important forces
12:40 towards the future, but I think the two most important ones
12:43 are artificial intelligence and energy.
12:46 If we can make abundant intelligence for the world,
12:49 and if we can create abundant energy,
12:51 then our ability to have amazing ideas for our children
12:55 to teach themselves more than ever before,
12:57 for people to be more productive,
12:59 to offer better healthcare, to uplift the economy,
13:01 and to actually put those things into action with energy,
13:05 I think those are two massive, massive things.
13:10 Now, they come with downsides, and so it's on us
13:14 to figure out how to make this safe
13:16 and how to responsibly put this in the hands of people,
13:19 but I think we see a path now where the world
13:23 gets much more abundant and much better every year,
13:26 and people have the ability to do way, way more
13:29 than we can possibly imagine today.
13:32 And I think we're, I think 2023 was the year
13:35 we started to see that.
13:36 2024, we'll see way more of it.
13:38 And by the time the end of this decade rolls around,
13:41 I think the world is gonna be in an unbelievably
13:44 better place.
13:45 It sounds sort of like silly and sci-fi optimism
13:48 to say this, but if you think about how different
13:50 the world can be, not only when every person has a,
13:55 you know, today they have like ChatGPT.
13:57 It's like not very good, but next they have like
14:02 the world's best chief of staff, and then after that,
14:05 every person has like a company of 20 or 50 experts
14:10 that can work super well together, and then after that,
14:12 everybody has a company of 10,000 experts in every field
14:15 that can work super well together.
14:17 And if someone wants to go focus on curing disease,
14:19 they can do that, and if someone wants to focus on
14:21 making great art, they can do that.
14:24 But if you think about, you know, the cost of intelligence
14:27 and the quality of intelligence, the cost falling,
14:29 the quality increasing by a lot, and what people
14:32 can do with that, it's like a very different world.
14:34 It's the world that sci-fi has promised us for a long time,
14:37 and for the first time, I think we get to start
14:39 to like see what that's gonna look like.
14:41 - Two quick questions to wrap up this conversation,
14:44 and again, thank you for being here.
14:46 Disqualifying yourself from consideration, and remember,
14:50 there are a lot of CEOs in the room tonight.
14:52 Who should be the 2023 CEO of the year, if not you?
14:55 - There are a lot of good choices for that.
14:59 I mean, I'm hugely biased on this.
15:03 I do think AI was sort of the most exciting, impactful
15:09 thing to happen this year, so I'd give it to one
15:10 of the other AI companies, but I'm like really biased.
15:12 - Smooth answer.
15:13 A much harder question to end this conversation.
15:19 What is your favorite Taylor Swift song?
15:21 - That is a hard question.
15:24 To pick like a not super popular one,
15:26 I would say "Wildest Dreams."
15:28 But-- - Oh, I like that.
15:30 - But all of the like, all of the super popular ones
15:32 are great too.
15:33 - Well, Sam Altman, TIME's 2023 CEO of the Year.
15:37 Thank you very much. - Thank you very much.
15:38 Thank you.
15:39 Thank you.
15:40 [ Applause ]
