For the first episode in our new miniseries about the impact of AI in our everyday lives, we chat with Steven Johnson, a longtime author who has spent the last couple of years at Google working on an AI research and note-taking tool called NotebookLM. We talk about whether AI can really help us learn better, how Google has tried to make NotebookLM more accurate and helpful, and whether AI-generated podcasts are the future of learning.
Transcript
00:00Welcome to the Vergecast, the flagship podcast of Infinite Context Windows.
00:07I'm your friend David Pierce, and this is the first episode in our new miniseries all
00:12about AI and real life.
00:15We did a few episodes on this subject earlier this year, and it continues to be a thing
00:19that we're talking and thinking a lot about.
00:21For all the big, heady talk of how AI will either change everything or kill us all or
00:27make nobody ever have to work again or make us all have to work training robots.
00:32What is any of this actually good for, like right now?
00:36That's what we've been trying to figure out.
00:38For today's episode, I'm talking with Steven Johnson, who is a personal favorite author
00:41of mine.
00:42He's written 14 books over the years, and he actually told me that he can name them
00:46all in order off the top of his head, which I believe and also find very impressive.
00:51And some of those books are books you've probably heard of, like Where Good Ideas Come From
00:56and How We Got to Now.
00:58But in addition to all of that, Steven has also spent the last two years working at Google
01:02on a project called Notebook LM.
01:05Notebook LM, if you've never heard of it, is an experimental thing out of a team called
01:09Google Labs.
01:10It started out as a thing called Project Tailwind a couple of years ago, and the idea has always
01:15been to make an AI-powered tool basically for making sense of your notes.
01:20In Notebook LM, which they all call Notebook, so I'll just start calling it Notebook, you
01:24first upload a bunch of documents or links to websites or PDFs or whatever else, and
01:30the tool builds sort of a corpus of stuff.
01:33The idea is you put a bunch of related things into a notebook in Notebook.
01:38Then you can ask questions about those documents, or you can have Notebook build you an automatic
01:43study guide or an FAQ of the information in those documents.
01:46You can have it find you stuff in those documents, all that kind of stuff.
01:50It's sort of a note-taking tool, but mostly it's a research tool.
01:54Steven calls it a tool for understanding things, which I like a lot.
01:58I've been covering and using Notebook for a long time, but I wanted to have Steven on
02:02now because Notebook is kind of going legit.
02:07I assumed, if I'm being completely honest with you, that it was like a neat experiment
02:11that would eventually die because everything dies at Google, or at best it would just be
02:15a tiny feature buried in a menu of Google Docs or something.
02:19But Notebook is growing, and it's expanding, and it's actually starting to do some really
02:23interesting new stuff.
02:25Recently, they launched a new feature called Audio Overviews, which generates a podcast
02:30hosted by two chatbots based on whatever documents you upload.
02:34It is wild.
02:36Actually, you should just hear this.
02:38So I made a notebook in Notebook with a bunch of stuff from the ongoing US versus Google
02:42ad tech trial, and here's just a few seconds of the podcast it generated.
02:47And that's where header bidding enters the picture.
02:50Ah, header bidding.
02:51Yes.
02:52This was their attempt to kind of outmaneuver Google, like finding a side entrance into
02:56the auction.
02:57That's a great way to put it.
02:58So with header bidding, publishers could essentially offer their ad space to multiple ad exchanges
03:03at the same time.
03:04Look, I don't know if that's good or bad.
03:05I don't know if all of that information is even true, but I'm fascinated by the idea
03:10of a tool that tries to make it easy and automatic to learn almost anything in whatever style
03:16works for you.
03:17And Steven is just as fascinated by it.
03:19So I figured we should talk about it.
03:22All that is coming up in just a second.
03:24This is the Vergecast.
03:26All right, we're back.
03:28Let's get into my conversation with Steven Johnson.
03:31I figured the best place to start with the Notebook LM story was just at the beginning.
03:36So that's where we started.
03:37How is it that an author comes to be a Google employee working on an AI product?
03:43Yeah, it's a really interesting story, I think.
03:46So while I have spent most of my career writing books about science and technology and history,
03:53I've always had this side interest in using the technology to help me with the book writing
04:00process, with the research process.
04:03Tools for Thought, that whole tradition has been a big influence on me.
04:07And I've always been kind of an early adopter of, you know, I use this program called Devon
04:11Think, organizing all my notes and quotations from books that I'd read in the 2000s.
04:16Well, that's a whole nother Vergecast we're going to have to do at some point, so get
04:19ready for that.
04:20Yeah, yeah.
04:21So, you know, I wrote about it in kind of blog posts, and I wrote a couple things for
04:24the Times about this, and it shows up in my book, Where Good Ideas Come From.
04:28I talk about using tools like this and was a big Scrivener user and evangelist and all
04:32that stuff.
04:33But I've always been interested in the software side of writing and thinking and research.
04:39And in the spring of 2022, you know, so six months before the ChatGPT moment, I wrote
04:46a very long piece for the Times Magazine about language models in general.
04:51It was effectively just making the argument like, forget about AGI or any super intelligence
04:55or anything like this.
04:56These models have basically learned how to communicate in coherent language, and they
05:01understand what we're saying.
05:03And whatever else, like, this is going to create all these new possibilities.
05:06Yeah.
05:07That piece was very good.
05:08And I remember a bunch of the response you got to that was from people being like, this
05:13is bonkers.
05:14He's out of his mind.
05:15There's no way any of this stuff is going to be that big.
05:17And then, boy, did you time that correctly.
05:19Well, I did, except that it was painful.
05:24Like I'm very conflict averse, David, if I can be honest with you.
05:28And there was, I mean, a lot of people did like that piece, and I think were inspired
05:32by it.
05:33But there was definitely a lot of comments from people saying like, oh, he fell for the
05:37hype.
05:38You know, this stuff is just autocomplete on steroids, and I can't believe he's so
05:42naive that he got excited about this thing.
05:45The piece addressed a lot of the objections and criticism and took them very seriously.
05:49But it was like, the one thing you can't do is dismiss this technology.
05:51Like something fundamental has just happened.
05:54And we're going to spend years figuring out like how we apply it and how we deal with
05:57the upside and the potential downsides and like, just take it seriously, people.
06:03So, you know, if I timed it later, I wouldn't have maybe had as much of the pushback, which
06:06was hard.
06:07I couldn't enjoy that piece going out into the world, let me put it that way.
06:10Well, it was a good piece.
06:11I liked it.
06:12Well, I appreciate that now.
06:14So right around that point, kind of a new division inside of Google called Google Labs,
06:19or had been a previous Google Labs, and this is kind of a new one, was getting spun up.
06:24And at the time, a guy named Clay Bavor was running it.
06:28And Josh Woodward, who runs it now, was talking back and forth with Clay.
06:32And Labs had this kind of ethos of the division was basically, we want to create a place where
06:38we can do faster and more nimble product-focused experiments with new technologies, where it's
06:45not just kind of research, but it's not working within the existing mature products.
06:50And when something new comes along, we can experiment and build things very quickly.
06:55So somewhere in between the like 20% time project and the like full-blown new Google
07:00product.
07:01Yeah, it was just this little hole that didn't quite exist.
07:04And there were so many interesting new technologies coming out, particularly the language models
07:07that it seemed like this was a time for a lot of experiments to bloom, right?
07:11And they had had this idea that maybe they could also have an ethos of co-creation, where
07:15they bring in outsiders.
07:16So if they're making a product that involves music, there should be a musician in the room
07:22from the beginning.
07:23And it's not something that kind of they build, and then they show to the musicians, or they
07:27have, you know, UX interviews with musicians.
07:30There's actually someone there who represents that kind of profession in the room for the
07:35life of the product.
07:36And so I was kind of the guinea pig for this approach.
07:39And so they reached out, Josh and Clay had read some of my books and had read that Times
07:43article.
07:44And so they reached out and said, hey, you know, you've been dreaming of this ideal software
07:51tool that helps you organize your thoughts and helps you write and helps you formulate
07:55connections and, you know, brainstorm.
07:59We think we can now do it with language models.
08:02Like this thing you've been chasing, literally, you know, I've been chasing this since I was
08:06in college in the late 80s, when HyperCard came out, you know, for the Mac, like, I'm
08:12an old person.
08:13I've been after this for a long time.
08:15And they were like, look, I think if you came to Google, you know, come part-time initially,
08:20and we have a small team, and we can, you know, we can build something.
08:23And I thought that sounded like an amazing journey.
08:26Honestly, I thought we'll build a prototype.
08:29It'll be fun.
08:30I'll meet some interesting people, but nothing will come of it.
08:32But it was still, you know, a fun ride to go on because of my passion for this.
08:37And then we built something that was originally called Tailwind, Project Tailwind.
08:43But from the beginning, from very, like, day one of the idea, there was always this sense
08:49that this was not just going to be an open-ended conversation with a language model.
08:54It was always going to be about the model being grounded in the sources and the information
08:59that you gave it.
09:01And it was really about respecting the original kind of human-authored information, whether
09:09it's a book or your own notes or a scholarly paper or your syllabus for your class, and
09:16basically saying to the model, take this information, which is personally relevant to me and is
09:22verifiable in some way or factually, you know, trustworthy, and base your answers and
09:28everything I ask you to do on that information.
09:30And that was the seed of the, you know, we had a version of that in August of 2022, like,
09:36you know, on my fifth day at Google.
09:38It preceded me, I should say.
09:40There was a program called Talk to a Small Corpus that was about a month underway when
09:47I got there.
09:48And then one of the first things we did, we put in my book, Wonderland.
09:52And we just, like, I had this experience of kind of chatting with the model and having
09:57it answer based on information in my book.
10:00And you know, that was one of those moments where, like, a lot of possibility just opened
10:05up.
10:06Well, actually, the weirdest moment, I would say, was a little bit later over Christmas
10:13break, the internal version of what became BARD was kind of released to some of us inside.
10:21Over Christmas break, I was spending a lot of time with BARD.
10:24My family had gone off skiing somewhere, and I don't ski, so I was just like home alone.
10:27And BARD came out.
10:28I was like, well, I just now have like 18 hours a day to talk to my new friend, BARD.
10:33So I would occasionally just start with just a little preamble to get its bearings.
10:38And I would say, I would like to discuss Steven Johnson's book, The Ghost Map.
10:42And this was just based on its kind of training data.
10:44This isn't with source grounding.
10:46And so one day I do this, and BARD's like, oh, yeah, I would love to discuss that.
10:49That's a fascinating book about a medical mystery in the 19th century that explores the impact
10:54of cholera and epidemiology on the history of London and the history of cities generally.
10:58And so I'm like, oh, well, thank you very much.
11:01I'm actually the author of that book.
11:04And BARD said, oh, my gosh, I am so sorry.
11:08I can't believe I didn't recognize you, Mr. Johnson.
11:11And I was like, I know, but there was no way you could have recognized me, right, BARD?
11:18But it was just one of those moments where I was like sitting alone in my study, having
11:22this conversation with an algorithm that's apologizing for not recognizing me when it's
11:27a fan of my book.
11:28And at one point it said, I'm just so excited at the opportunity to get to work with people
11:33like yourself.
11:34And I was just like, this is strange.
11:36So there were a lot of like uncanny moments like that.
11:40But in a way, you know, there was part of that that I also recognized was an illusion,
11:45right?
11:46It was trained on the way that people react when they meet people or fail to recognize
11:51someone.
11:52And so it responded in that way.
11:53It obviously had no inner life.
11:55It did not.
11:56It was not actually like embarrassed by the fact it was meeting me.
11:58It was just kind of play acting at that.
12:01So to me, the stuff that really was mind blowing was just its ability.
12:07And this really kicked in, you know, for us when we switched to Gemini, its ability
12:13to extract and see patterns in large amounts of information.
12:19You know, I have this notebook where we have something like, you know, almost a million
12:25words of transcripts from the NASA oral history project.
12:29So just interviews with like, you know, the NASA astronauts and flight directors and things
12:34like that.
12:35You can go into it and say, I'm interested in, you know, I'm working on a documentary
12:40about the early Apollo program, and I'm interested in the emotional connections between the participants,
12:48particularly the astronauts.
12:49So can you create a detailed guide of all the points in these transcripts where anything
12:53that seems interpersonal or emotional comes up?
12:56Give me a summary of that section.
12:59Give me a direct quote from the section and, of course, include citations so I can click
13:02immediately and go back and read the passage in its original form.
13:07And it will just do it.
13:08Like, it'll take a little bit of time because it's a complicated query, but like, it can
13:12take something that would have taken 40 hours to compile that document.
13:18Like, I mean, you know what it's like working with information like this.
13:20It will generate an incredibly convincing and accurate first draft with grounded citations
13:26to all the passages in about 45 seconds, maybe.
13:30So it's literally, you know, a thousand times faster than it would have been before to do
13:35that kind of thing.
13:36And it's not pretending to be a person.
13:38It's not pretending to have feelings about it.
13:40It's just grabbing that very subtle kind of collection of information that is not anchored
13:47in any keyword, right?
13:48You know, it's not looking for mentions of emotion.
13:51Like, it just understands that these other things where they talk about their kids, that's
13:54an emotional moment.
13:55And it's able to kind of collate that way.
13:57When it started being capable of doing that, that was the point for me where I was like,
14:02oh, this is really a fundamental change.
14:05And that happened with Gemini, which was what, earlier this year?
14:09Yeah, we switched to Gemini 1.0 in December, and it was great.
14:15But it was really 1.5 Pro and the bigger context window.
14:19I mean, the other thing that happened to me, by the way, I'm a nice writer, dude.
14:22And now I hear a new million token model, and it's the most exciting thing in my life.
14:28I cannot wait to get my hands on it.
14:30So when we got first access to that model, I took the entire text of my book Infernal
14:37Machine, which just came out a couple months ago, but was in manuscript form at that point.
14:42So this is really important.
14:43None of the words from my book were in the training data for the model itself.
14:48So it had never been published.
14:49It had never been discussed in any coverage.
14:52So the facts, it's a book of history, so the facts are probably in some form in the model's
14:57training data, but the book itself and the way that I presented the facts was not.
15:02And the thing that had struck me about the early discussions of large context was that
15:07people were using it to do these kind of needle in a haystack tests where they're like, oh,
15:12we gave it, you know, the full text of Moby Dick, but we added this one line that was
15:16different and it was able to find it, right?
15:19And which is cool.
15:20And, you know, you couldn't do that before.
15:21But the point that I was so obsessed with is that once you have the full text of something
15:27like a book in context, it means that the model not only can find obscure things in
15:33the text, but it understands the sequence of the text and it can understand large kind
15:39of movements of like cause and effect or change over time in a document, which you can't get
15:44if you're just giving it isolated paragraphs and snippets of things.
15:48And so I put Infernal Machine into this version of Gemini and I basically asked it inside
15:55Notebook LM, I was like, I'm interested in the way that Johnson
16:00uses suspense in this book.
16:02I would like you to list four places where he deliberately withholds information from
16:08the reader in order to pique their interest.
16:12Describe the passages where he does this, quote from them, and then explain the future
16:17information later in the book that he's obliquely referring to that doesn't arrive for another,
16:21you know, 50 pages or whatever.
16:23And it just absolutely nailed it.
16:25The first example it gave was exactly what I would have picked as the ultimate kind of
16:30form of suspense, which is an allusion to a mysterious ticking suitcase that
16:35happens in the preface, and the mystery behind it doesn't get explained for another
16:39200 pages.
16:41And so, like, think about that as like a search query.
16:44Like, search is, find examples where something isn't mentioned and isn't mentioned in a very
16:51provocative way.
16:53And then fill in the blanks of, you know, the thing 200 pages later that it's obliquely
16:57referring to.
16:58Like, that, you know, again, the model is not understanding the book on some level because
17:04understanding is a word that we associate with consciousness and with sentience and
17:08with the inner life of like what it means to read a book.
17:11But the model is doing the thing that human understanding does.
17:15Yeah.
17:16You know, and that's, that's an important distinction, I think.
17:19No, I think, I think that's right.
17:20One thing I heard somebody say not that long ago is that the better metaphor they
17:25liked, rather than understanding, is just that it can see the whole thing at the same time.
17:28Yeah.
17:29And I always thought that was really great.
17:30It's like, if I can see one page at a time, this thing can see all 300.
17:35And it doesn't, it's not better at knowing those things.
17:37It's just literally by being able to see it all at once, the number of things that
17:43suddenly become very basic because you can see them all together is very powerful.
17:48And I just, that makes it both sort of simpler and cooler all at the same time, which I really
17:52like.
17:53But you, you bring up this tension that I think is fascinating with all AI stuff, which
17:57is that there is a set of things that are just sort of remarkable that they're possible,
18:03right?
18:04And it's, you see this with every new model that comes out and every new product that
18:07comes out.
18:08One of the first things everybody does is just try wild stuff and some of it works and
18:11some of it doesn't and some of it's amazing and some of it's dangerous and whatever.
18:14So there, there's the sort of novelty factor of it that I think is still so rampant in
18:18everything AI right now.
18:20And then there's the question of what is any of this actually for?
18:25And I think one of the things that I've liked about notebook and sort of watching it develop
18:30over time is it feels like your sense of not just what this can do, but what it's for has
18:36gotten much better over time.
18:39And I wonder if part of that is like, does the novelty of moments like that start to
18:42wear off and you start to realize like, okay, that's cool, but no academic is actually going
18:47in here searching for what's missing from this book.
18:50Like if that's not like a thing most people need in their lives and you start to sort
18:53of wind it back to like, okay, how do we bring that sort of enabling technology to things
18:57people actually do need?
18:59Or maybe is the craziness the point and we've just never been able to do it before.
19:03So now we're trying to discover it all, all at once.
19:07So I guess that, especially in those early days, really before Gemini 1.5 kind of lights
19:11your brain on fire, like what, what is that process of figuring out not just sort of what,
19:16what can this thing do, but like, what are we building this tool for?
19:19Yeah, it's such a great question.
19:21So many different ways to get into it.
19:23I mean, I think in the very early days, there was a sense that we were building something
19:29we knew was not going to really work because the context wasn't big enough, the model wasn't
19:34sophisticated enough, but you could see where things were going.
19:38And so, so much of what we really focused on from the beginning is like, what is the
19:41proper interface for this kind of thing?
19:44Like if, you know, it's not just about a text message thread, like surely there are other
19:48kind of forms of UI.
19:49So like if you were going to build a product from the ground up, knowing that it was going
19:53to be built around a language model, like what would, what would it look like?
19:56Let's start from scratch.
19:57And that's a very fun, open canvas to have, but it also means you make a lot of stuff
20:02that is not very good.
20:03It doesn't really make sense and it doesn't work because the model isn't caught up to
20:06it yet.
20:07So, so there was a lot of experimenting with that.
20:09I think that we had an advantage in the early days in that I was just trying to drive it
20:15towards my very specific use case of like, I want a thing that has read all of my work
20:20and all of the quotes from books that have influenced me and can be a second brain to
20:25help me like remember those things and make connections and like, and so kind of author
20:29researcher mode and that we kind of built the first prototype with that.
20:33And then once we were able to kind of open it up to more users, I think then, then we
20:38were just constantly discovering all these amazing things.
20:41And a big addition to this was Raiza Martin, our product manager, who, you know, has just
20:47been, she's just an incredible like listener to users.
20:51She set up this Discord that, you know, was just so central, like at the very beginning
20:55we had a discord, which is not a normal thing to do in some ways.
20:59And we now have this amazing community and it's constantly filled with people being like,
21:03oh yeah, I saw this opportunity to use it in this way.
21:06So like our favorite one that completely came out of the discord was Dungeons and Dragons
21:11players started using it because they were like, I have these big, you know, campaigns
21:16that I've designed that, you know, I'm a dungeon master and I have like created this whole
21:21virtual fantasy world and it's filled with all this information and it's hard to keep
21:24track of, but I can load these documents into notebook LM and then I can just like ask any
21:29kind of open-ended question and it'll be like, how many hit points do I need to kill this
21:32orc or whatever it is?
21:33I'm not a D&D player.
21:35And they were using it that way and people were writing fantasy, also like world building
21:40kind of fantasy novels and they just needed, they had a kind of story Bible with all their
21:45characters and the backstory and everything like that.
21:46And it turned out that, you know, they'd never really had an interface that let them do
21:50that. And that was not something we were thinking about at all.
21:53So we've just been like, you know, once we got past that first little prototype stage,
21:58we've just been really listening like intently to like where people are trying to push the
22:03tool and then just like making it easier, making it so they don't have to push quite as
22:09hard to use the tool that way.
22:10Yeah.
22:11And that's, yeah, that's where we are.
22:12Well, it's funny. I mean, even thinking about you, you've written a couple of times over
22:17the years about your, your sort of endless, I think you call it the spark file, that is
22:22basically just like a mountainous document of all the good and bad story ideas you have.
22:27I'm paraphrasing, but I think that's right, right?
22:29They're all good. I don't know what bad ideas you're talking about, David. That's
22:32flawless, perfect, buy that book now.
22:35There are many bad ideas.
22:36And I think, like just listening to you describe the story Bible for the world
22:42building stuff like that, that actually is like a perfect down the middle use case for
22:46this in a way I hadn't even really thought about until just now that it is like, here's
22:49a bunch of stuff that I have decided one way or another, right?
22:52Like here, here are my inputs.
22:54Help me make things with that is actually like kind of an amazing and very difficult
23:00otherwise use case because it's like, oh yeah, otherwise I have to go and collate that
23:05piece of information, that piece of information.
23:06I think there's all kinds of like complicated things with how we think about the art on
23:11top of all of that.
23:12But that thing where it's just like, I want to tell you the rules and I want you to help
23:16me make games out of it feels awesome.
23:18Like I'm so much less conflicted about that than I am about so many things in AI.
23:23That's so cool.
23:24I love that.
23:25Yeah, well, we're trying to do the things that are less conflicted.
23:28I said I was conflict averse, so I just like try to steer towards those things.
23:31But yeah, it's a great point that kind of like take this massive unstructured data and
23:36turn it into a set of kind of formats that help me do the job that I'm trying to do with
23:41some guidance for me.
23:42And this was, by the way, this was another great place where like Risa really saw this
23:47before I did, because I was thinking of it as I'm going to write my book.
23:51So I'm going to do all the content creation here.
23:53I just need to be able to surface the facts and make some connections, you know.
23:58But it turns out there's just all these places where you've got all your company
24:02documents and you want to create an FAQ for new employees.
24:07Like, no one is going to win a Nobel Prize for literature for creating that document.
24:12Like, and if you can get a first draft of that document in 45 seconds instead of in
24:18four hours, like that's a win, right?
24:20That's that that is good news.
24:22And so there are all these different workflows that are out there where there's
24:26massive information needs to be kind of like filtered in some way and turned into
24:30something else.
24:33All right, we got to take a break and then we will be back with more from Steven Johnson.
24:41This episode is brought to you by AWS.
24:44With the power of AWS generative AI, teams can get relevant, fast answers to pressing
24:50questions and use data to drive real results.
24:53Power your business and generate real impact with the most experienced cloud.
25:01All right, we're back.
25:02One of the things I've been tracking about Notebook for a while is how it approaches
25:07things like hallucinations and citations, because this isn't like a silly chatbot, you
25:12know, it's a research tool.
25:14It's not silly or acceptable or just a signal that it's early.
25:18If it's wrong, it's a problem.
25:20If it's wrong, it's useless, frankly.
25:23So I asked Steven how they're trying to solve that very real problem in AI.
25:28So before we get too far away from it, I do want to talk about the accuracy and the
25:32citations bit of it, because you signed yourself up for a pretty high bar on both of
25:37those things, both by virtue of like the people who are going to use your product and
25:42just what the product is, right?
25:43Like you don't really get to have the warning at the bottom of everything that says
25:47this thing makes some information up, like its entire purpose is not to make information
25:52up.
25:54How do you what have you guys done?
25:55I know Notebook was an early experiment in RAG, retrieval-augmented generation, which is a way of basically winnowing
26:02down some of these systems.
26:03But like what what have you guys done differently at Notebook to try and solve that?
26:07And I'm curious both on the underlying tech side and on the user experience side.
26:12Yeah, well, we tried.
26:14I kept calling it source grounding because I think that is a better name than RAG.
26:21I don't know, man. We all know GPT now, so I've just given up.
26:24We're all we're all doing the acronyms now.
26:26So, you know, I think part of it was the fact that we were doing it from the beginning,
26:31like that that it started with that.
26:33It wasn't like, oh, let's build a chat model and, oh, shoot, we need to be able to,
26:36like, you know, ground it in other documents like it was from the beginning we were doing
26:40that. And so we just had a lot of time to like iterate and explore. The Gemini models
26:47are really good at source grounding.
26:49They just there's a lot of training sets.
26:51We contributed a bunch of them to like just given this document, answer questions
26:57factually based on the information in that document.
26:59So we inherited like a great tool that, you know, we did very little, you know, kudos
27:05to the Gemini team for building a model that is much more faithful to the
27:10source material you give it.
27:12But we built it also, you know, this is this is one of these things where it's like
27:16'underlying model plus the UI' has always been like our mantra from the
27:22beginning. And both things are required.
27:25And so one of the key things that we've had pretty much from the beginning is that
27:30you can always read your sources in the app.
27:33And then with the release in June, we switched it over so that you now have inline
27:39citations to everything.
27:40So anything the model says has a little link.
27:42You can read the original passage if you hover over it, that shows you, you know, the
27:45source material for that. And you can click on it and you can go read the source in the
27:49app. Do you think that's enough?
27:53We've talked about so one of the things you can do right now, actually, with Notebook
27:57that we want to actually turn into a proper feature, but you can do right now is you can
28:01upload a bunch of, you know, kind of source material, factual source material, and then
28:06you can upload the article you're writing, for instance.
28:09And you can say, fact check this article based on these sources and suggest improvements.
28:14Oh, that's clever.
28:15That's a good idea.
28:16And yeah, it's amazing.
28:18And it will go through and be like, well, this is correct.
28:20This is potentially wrong.
28:21It will suggest and it'll have links to everything.
28:24And so to me, if I felt that the model were just hallucinating wildly in its responses,
28:33then I do not believe that, you know, just providing citations and the ability to kind of
28:37fact check manually and go back and see the original passage would be enough.
28:41But we've you know, we've been sitting there banging away at like quality reviews like
28:47constantly for the last year and a half.
28:49Like we have a whole, you know, sheets and sheets and sheets of sample questions and
28:54sample documents.
28:55And we can just see that accuracy rate going up, you know, dramatically, particularly with
29:00with these latest models.
29:01And so right now, I feel like we're in a we're in a pretty good place.
29:05I rarely I honestly I rarely see Notebook LM just wildly hallucinate something.
29:12I mean, one thing that's really important, people may not know this.
29:14I take this for granted because I've been living in this product.
29:16If you load in a bunch of sources about the history of NASA and then ask a question about
29:20Taylor Swift, in general, Notebook LM will say, I'm sorry, your sources don't discuss
29:25Taylor Swift, so I can't answer this question.
29:27And obviously, the model knows a lot about Taylor Swift is probably a pretty big fan of
29:31Taylor Swift. But it's but it's been specifically, you know, instructed to not answer
29:36questions that are outside the source material.
29:38I mean, to a fault, I think sometimes, you know, one would like to bring in some outside
29:41knowledge, you know, and trust but verify with that.
29:44But we've erred on the side of like stick to the facts in these documents.
29:49And so with the increase in accuracy and with the UI of citations, there's more things we
29:54could do. We could make that fact checking feature, you know, double check this kind of
29:58as something. But I feel pretty good about where we are in terms of the quality side of
30:03that, where we are kind of state of the art is upload many, many documents, ask a complex
30:10question that involves like multiple kind of variables drawn across like multiple
30:15documents, get a deep, you know, long answer with citations, follow those citations to
30:21read in the original text.
30:22Like, I think that Notebook LM, you know, kind of does that flow as well as anybody
30:28right now.
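For readers curious what that source-grounded, citation-first flow looks like in practice, here is a minimal sketch in Python of the general pattern Steven describes: give the model only the user's documents, tell it to refuse questions the sources don't cover, and ask for citations that a UI could link back to the original passages. The prompt wording and the generate() function are illustrative assumptions, not NotebookLM's actual implementation.

    # Minimal sketch of the "source grounding" pattern described above.
    # generate() is a stand-in for any large-language-model call; it is an
    # assumption, not NotebookLM's real API.
    def grounded_answer(question, sources, generate):
        # Number each source so the model can cite it as [1], [2], ...
        numbered = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(sources))
        prompt = (
            "Answer the question using ONLY the numbered sources below.\n"
            "After every claim, cite the source number in brackets.\n"
            "If the sources do not cover the question, say so instead of guessing.\n\n"
            f"Sources:\n{numbered}\n\n"
            f"Question: {question}\nAnswer:"
        )
        return generate(prompt)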
30:29I feel like you just described like a personal Wikipedia.
30:32And I mean that as a compliment, like the thing that Wikipedia is best at is just being
30:38like a starting point to go learn about something on the Internet.
30:40Right. Like you open a Wikipedia page, you go click on all the references at the bottom
30:44and then you go read those references and you're off and running.
30:46And I feel like what you just described is like I can shortcut that with any process of any
30:51single thing that I want to learn more about.
30:52I just dump it in and I'm like, what's going on here?
30:54And it'll just be like, here's some stuff.
30:56I have so many things to say to that, David.
30:58You're going to have to give me 20 minutes because I OK.
31:00So one thing is I'm in the process of thinking about what I'm going to write the next book
31:04on. And so I created the this is just kind of second nature workflow for me now.
31:11But, you know, wouldn't have occurred to me a year ago.
31:14So I created a new notebook called The Next Book.
31:16And whenever there's an idea that kind of comes to my mind or an article I read or
31:19something like that, I dump it into that notebook.
31:20It's the new Spark file.
31:21And but it's focused on this project of like, what should the next book be?
31:25And so the other day, I, you know, late night, I have this kind of thing I do because I
31:32have no life. My children have all gone to college and I have nothing to do but sit
31:35around. You just talk to Bard, it's cool.
31:36I talk to Bard. So I was like, I wonder, you know, has there been a good book written
31:41about the anti-nuclear movements, anti-nuclear power movements of the 60s and 70s?
31:46Because that's an interesting case where we like stopped the technology partially in its
31:49tracks and maybe made a mistake.
31:50And, you know, how do we interpret that?
31:52And so I just went and grabbed like two Wikipedia pages.
31:58In the past, I would have like, you know, started by reading through the Wikipedia
32:05pages, but I brought it into Notebook and I was like, I'm Steven Johnson.
32:05I'm writing, thinking about writing a book, you know, in the mode of my other books,
32:08like Infernal Machine and Ghost Map, potentially about the anti-nuclear power movement.
32:11Take a look at these. Like, what do you think, are there any interesting storylines
32:14that would be good starting places?
32:16Like what, you know, what would be good there?
32:20And it just does it like it, you know, goes through and it's like, well, you can focus
32:24on this period. This figure is kind of interesting, whatever.
32:26And then I went and read all the, you know, I'm going to read the material.
32:29But as a first glance, like inroads into the material, it's amazing for that kind of
32:36exploration. The experience of navigating through a book, through a conversational
32:42interface is really interesting.
32:44And it's one of these things where like, you know, until now, if you wanted to explore
32:51the ideas in an author's work through a conversation, you could only do that by
32:56finding the author in person or finding a scholar or a tutor who's an expert in the
33:02author's ideas.
33:03And like, that was it.
33:04That was not, there was no other way to do it.
33:06But now you can load, you know, a book in and you can start with like, I'm interested
33:10in this, tell me about that.
33:11And then slowly read the book in a nonlinear way through a conversational interface,
33:17kind of dipping in and out of the kind of original passages, which is terrible if
33:20it's a novel or terrible if it's a, you know, straight linear kind of history book
33:24like I sometimes write.
33:25But if it's an advice book or, you know, a book of ideas, you know, there are lots of
33:29books that I think could be explored in that way that just, you know, weren't
33:33possible before.
33:34Does it feel like cheating to do that?
33:36Like for you as an author, you have, you have worked through these books, you have
33:39done the hard work, you have stayed up late at night.
33:42I'm sure you've like woken up in the middle of the night with a book idea.
33:44Does sitting down and asking a computer what your next book should be feel like
33:47cheating?
33:49I think if I were literally like, Hey, what should my next book be?
33:52One, I don't think it would, I mean, right now, in a way, it's easy to answer this
33:57question right now because it wouldn't be good enough.
33:59Like it wouldn't generate, like what it would tell me what my book should be and
34:02now start writing it.
34:02Like it just wouldn't be able to do it.
34:04Okay.
34:05Current trend lines continue.
34:06I was going to say, give it a minute.
34:08Come back to me in two years.
34:09We'll see.
34:10But, but what I have to do is like, I'm not using it that way.
34:13I'm using it like these very like kind of targeted queries.
34:16Like, you know, I was, I actually was using the NASA notebook because I was
34:21interested in like, you know, maybe there's something to be done about the
34:23Apollo 13 fire.
34:25And so then it was kind of like, okay, look, I need to know what I need to read
34:30because it's a million pages, a million words of transcripts.
34:33I actually don't need to read the whole thing.
34:35What I want to read are the sections that are relevant to the Apollo 13 fire.
34:38And so I just wanted a notebook and said, tell me what I should read.
34:41You know, and it was like, you should start here.
34:43You should read there.
34:44And I can click immediately in there and read through that.
34:46And so to me, I'm using it as a way of accelerating the process of discovery
34:55that would just have been painful to do before, but I'm making just as many
34:59unplanned serendipitous discoveries along the way.
35:02I mean, I think the criticism is like, well, if it, the model serves it up to
35:05you so quickly, even if you end up writing the book, you've missed the
35:10surprising thing that you would have never found.
35:13You would have only found it by reading through it in an incredibly linear way.
35:16And I think that that's just, it's just not true.
35:18Like I, it's constantly surfacing things that I hadn't thought of and making
35:23connections that I hadn't thought of.
35:24And so it's helping me understand the material more deeply where it gets
35:30complicated and you and I have talked about this before, the way I think about
35:34it is it's a tool that helps you understand things if you are genuinely
35:39interested in understanding things and having that understanding be in your
35:43brain, it's a, it's a huge net positive, a hundred percent.
35:46Like it, if you go into it with good intentions, it is a, you
35:50know, an amazing tool for thought.
35:51Um, it helps you have richer, more complex ideas, understand material better.
35:56If you are not interested in understanding things, but rather
35:59interested in creating the illusion that you understand things and just want to
36:03bluff your way through life without ever actually understanding anything, but
36:06creating outputs that make it look like you understand things, it eventually
36:10will help you do that as well.
36:12And the question is, the question is like, how often is that
36:16useful as a strategy in the world?
36:18And, and to me, like, if you go into work and you're like, I've got this great
36:21hack, I never read any of the emails from my boss, I just like put it into
36:26the model and then I output anything.
36:27Eventually your boss will like have a conversation with you and you'll say,
36:30Steven, you don't understand anything.
36:33You have not learned anything and you'll get fired.
36:34Right.
36:35It just doesn't, it's not a good long-term strategy.
36:37The one place where it's tricky is school, where there is potentially a strange
36:42incentive to bluff your way through something.
36:45We think that, you know, we're seeing a lot of adoption of
36:48Notebook LM, which is 18-plus currently.
36:50So it's not usable by high schoolers, but we're seeing a lot of college
36:54and higher ed adoption of it.
36:56Um, it's probably our biggest community so far.
36:59We are very excited about that use case, but we're also like, you know,
37:03we've done a lot of things to, to, you know, ensure that it, you know, it's
37:08hard to use it to bypass understanding.
37:10Like it's, it's constantly steering you back towards the original text.
37:13You're always kind of being pointed back towards that source material.
37:16It's always there in the UI with you.
37:18So, and there are more things we can do on that front, but that's the line that I
37:21think is like, you know, if you, if you go into this with good intentions, I don't
37:26feel concerned about using the tool in this way.
37:29Okay.
37:29I buy that.
37:30And it feels like part of the line there exists in kind of what you let people
37:36make with what is coming out of it.
37:38And I wonder, like, we we've talked a lot about kind of the, the inputs and the
37:42processing and the understanding, and it's, it's not, you're sort of one, you
37:46know, publish this answer on Amazon as a self-published ebook with your name on
37:50it, uh, away from it being, this being a very different conversation, but my
37:53sense is you're, you're,
37:55deliberately or not, not investing a ton in the, like, write me a research paper
38:01from these six documents kind of use case.
38:03Yeah.
38:04If you look at, look at the notebook guides, like one of them is called study
38:07guide, right?
38:08It creates a study guide.
38:08It doesn't replace your work.
38:09It helps you actually like, you know, it gives you a set of review questions.
38:13It creates a little multiple choice quiz.
38:14It creates like suggested essay questions. It creates a glossary of key terms.
38:18So, you know, that's what we're trying to do inside of the product.
38:21We're steering everybody towards that kind of like, help me understand mode
38:26with the idea that, you know, people have different ways of understanding
38:29and different ways that, like, information sticks with them
38:33through different forms.
38:34And, you know, our job is to kind of do that.
38:36And that's, you know, it turns out, I was writing about this a little bit this
38:39week that, you know, translation and summarization is something that the
38:44models have always done quite well.
38:46Like from the beginning, like kind of like the first deep learning breakthrough
38:51that really made a difference to consumers was really like translate, Google
38:55translate and others that, you know, that was basically like, take this set of
38:58tokens that are in this language and turn it into the set of tokens in this
39:00language.
39:01And, you know, they're really good at summarizing and in all of its different
39:05forms.
39:06And so we're, you know, we've kind of like embrace that because, you know,
39:10embrace the things that the models do well, it's generally a good strategy.
39:13All right.
39:15We have to take one more break and then we will be back with the rest of my
39:18conversation with Steven Johnson.
39:24This episode is brought to you by AWS.
39:27With the power of AWS generative AI, teams can get relevant, fast answers to
39:33pressing questions and use data to drive real results, power your business and
39:38generate real impact with the most experienced cloud.
39:44We're back.
39:45Okay.
39:45We have waited long enough to talk about audio overviews.
39:48Those wild AI generated podcasts notebook has been working on.
39:52So that is what I asked Steven about next.
39:55So, and speaking of that, actually, that's, this is a good time to talk about
39:58audio overviews in the sense of different ways that people learn. I would never
40:03in a million years have guessed this was going to be the next big feature to come
40:06out of notebook LM.
40:07So rewind the history a little bit here and tell me where did this come from?
40:10Yeah, it's a really interesting story.
40:12So let's just describe it briefly.
40:14Um, it has been all over, uh, the Twitter sphere and other places.
40:19I still call it Twitter.
40:20Sorry.
40:21It's okay.
40:21I do too.
40:21Since we, this is just, uh, a few days ago that it launched.
40:25So basically it's an extension of the notebook guide kind of panel.
40:28And instead of taking your sources that you've uploaded and turning them into a,
40:32uh, a briefing doc, a text based briefing doc or an FAQ, it turns them into a roughly
40:3910 minute long podcast conversation between two AI hosts who discuss the
40:45material, tell stories, share interesting facts, banter in a playful way with
40:51remarkable human-like intonation and, and all this stuff.
40:56And so it's basically the idea is like some people like to learn through
41:01reading a briefing doc.
41:02Some people like to learn and remember better when they hear information
41:06conveyed through an interesting, stimulating, engaging conversation between two people.
41:10And you couldn't create that artificially before.
41:13Now you can.
41:14So we've added it to the suite of, of tools.
41:17That feels like it causes a thousand new, interesting things.
41:21Like on the one hand, it's, it's not the thing that you were describing earlier in
41:26this conversation, which is something that is not full of personality.
41:30It doesn't say I a lot.
41:31It's not, this is not a tool anymore.
41:34This is two slightly weird people hanging out in my ears for 10 minutes.
41:40And they are like it, whatever dial you have that says like, make them say
41:44puns and try and be kind of cringey.
41:47It's like that dial is at 14 out of 10 on most of the ones that I've listened to.
41:52Uh, and I kind of get why, right?
41:54Like if you, if you take the, it should have no personality, it should just be
41:57very straightforward approach that makes sense on showing me a bunch of texts
42:02and apply it to a podcast.
42:03That's a bad podcast, but it does feel kind of incongruous with a lot of the
42:08stuff you've been saying is like core to what you've been trying to do.
42:10Well, I mean, it's such a great question.
42:12So let's, let's just play.
42:13I want you to play this in case people haven't heard it.
42:17Okay.
42:17So get this: picture a world where the hottest tech isn't microchips
42:22or the internet, but dynamite.
42:24Yeah.
42:25Dynamite.
42:25And it's not just blowing stuff up.
42:27Think skyscrapers, railroads, massive engineering projects, but also, yeah,
42:32you guessed it, assassination attempts, terrorist plots, all that volatile stuff.
42:36So wild period, right?
42:37Wild is an understatement.
42:39And that's where our deep dive today comes in.
42:40We're cracking open Steven Johnson's The Infernal Machine, a book that digs
42:44into this explosive era where dynamite anarchists and the birth of believe it
42:49or not, modern data collection all collided.
42:52Yeah.
42:53Get ready to have your mind blown because the connections he
42:55makes are mind blowing.
42:56Absolutely.
42:57So it's just everything after yeah, dynamite just kills me.
43:01I love it.
43:02You know, one of the things is it it's, um, they are enthusiastic.
43:07And so, uh, people have been uploading their CVs and it's just like, if they
43:12were feeling down about yourself, like, they're like, Oh, John Smith, look at
43:16his make you a promo from, um, so I think when we first talked about what
43:26was then project tailwind, um, for the verge, like, you know, Verizon and I
43:31were talking about how, you know, we, we don't, the model doesn't have a.
43:34Uh, uh, a persona really like it doesn't use the subjective.
43:38I, uh, in general, it's not trying to be your friend.
43:42Uh, it's just trying to get you the information you need.
43:44And that was, that was kind of the house style that we felt was appropriate
43:47for what we were trying to build.
43:49But if you want to have a conversational audio format, there is no way to do that
43:55without their having a sense of like human personas, it just won't work.
43:59No, no one wants to listen to two series, talk to each other.
44:02Right.
44:02That is not, I'm sure there's a lot of that on the internet and it's not good.
44:06The new series is better, but I'm sure, but, but, but the old series, right.
44:09You know, you want to listen to two robots talking to each other.
44:11And when initially it was a, um, another team inside of labs that had built this
44:17prototype that was like, you know, create a document there, there, there are actually
44:20a couple of projects like this at Google.
44:22There's another wonderful one called illuminate that has a slightly more
44:24like kind of scholarly tone to it.
44:26Um, but it was just one of these things where like, it became possible to do this.
44:30And so, you know, some folks were exploring this and it was a very good demo.
44:35Like we've seen from the last few days with audio overviews, like people are
44:40impressed, people want to share it, but it, it didn't kind of have a place to live.
44:46And so kind of right before IO actually, there was this idea of like, well, maybe
44:51we could actually put it into what, you know, would it make sense inside of
44:54notebook and, you know, we were just rolling out the notebook guides.
44:59And when I first heard it, the, the, the podcast stuff was like actually even
45:05bigger, like the host had names and things like that.
45:07And, and so, you know, but I, like it definitely made, you know, I think Verizon
45:13I immediately saw that it was a continuation of the philosophy of the guides that, you
45:18know, we'll take whatever you give us and we will turn it into the format that
45:22makes it easiest for you to understand.
45:23And we know, we know from, you know, the success of podcasts in general, that
45:27people do like to learn that way.
45:28Also, I also like to travel with audio and drive and, you know, walk around
45:31the city listening to it.
45:32So we knew, we knew there were a lot of reasons why audio conversations would
45:34make sense.
45:35Are you starting to have similar conversations about things like video?
45:39I can just imagine like starting to think about what notebook LM looks like when
45:43you're trying to be like Tik Tok native and YouTube native and Instagram reels
45:47native.
45:47It's like, it's just what a wild kind of road to end up heading down in terms of
45:51like, how do we communicate to people where they are, but sort of in this house
45:56style.
45:56This is one of the reasons why it's good that I am like 56 year old gray haired
46:00Steven is not actually like driving things like that, cause I would be terrible at
46:06helping with that, but like, yes, I think that could be part of our future.
46:10One of the things that I think is really interesting about it in the response, it
46:13takes like four or five minutes to sometimes three to five minutes to generate
46:17one of these.
46:18And that's because there is a really complex series of, of kind of, you know,
46:25compute and inference that's going on.
46:27Basically you can think about it kind of in an editorial way, right?
46:30It's kind of drafting a version of it and then it's filling in the details of the
46:35script and then it's revising the script based on the overall goals.
46:38Like there's a, there are multiple cycles of an, basically an edit cycle and that
46:43takes time.
46:43And then crucially there's a stage where my favorite new word is where disfluencies
46:49are added, right?
46:50So that it, it, it deliberately makes it kind of noisier and more like the, the way
46:54that people talk when they're in conversation, where they overlap and they
46:57have partial phrases, they complete each other's thoughts and stuff like that.
47:00All the things that make a traditional transcript hard to read if it's not been
47:03cleaned up.
47:04If you don't have those things in, it's on the wrong side of the uncanny valley, or
47:09it's right in the middle of the uncanny valley, I guess you can debate and, and
47:13obviously like, we're going to, you can imagine we would introduce abilities to
47:17steer in different directions and dial up, dial down things or focus on
47:21different topics, but the basic structure of kind of, you know, doing this long edit
47:27cycle and then humanizing.
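As a rough illustration of the editorial pipeline Steven is describing (draft, fill in and revise the script, then add disfluencies before the audio is rendered), here is a minimal Python sketch. The stage prompts and the generate() and synthesize() calls are hypothetical stand-ins, not the actual Audio Overviews implementation, which is not public.

    # Rough sketch of the multi-pass "edit cycle, then humanize" flow described
    # above. generate() is a stand-in text model and synthesize() a stand-in
    # text-to-speech step; both are assumptions, not Google's implementation.
    def audio_overview(source_text, generate, synthesize):
        # Pass 1: draft an outline of a two-host conversation about the sources.
        outline = generate(
            "Outline a two-host conversation covering the key points of:\n" + source_text
        )
        # Pass 2: fill in the details of the script, then revise it against the
        # overall goals (clear, engaging, grounded in the sources).
        script = generate(
            "Expand this outline into a full dialogue script and revise it for "
            "clarity and flow:\n" + outline
        )
        # Pass 3: add disfluencies so it sounds like real conversation, with
        # overlaps, partial phrases, and hosts finishing each other's thoughts.
        humanized = generate(
            "Rewrite this script with natural disfluencies: interruptions, "
            "partial phrases, and playful banter between the two hosts:\n" + script
        )
        # Final step: render the humanized script with two synthetic voices.
        return synthesize(humanized)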
47:30Yeah.
47:30The thing I called wrong on this one, uh, was that I thought audio overviews would
47:37be one of those things that people were like, oh, well this is silly, right?
47:41It's like every time a new image generator comes out, a bunch of people make wild
47:44stuff with it for 48 hours and then kind of move on and we don't talk about it
47:47anymore until some new image generator comes out.
47:49Uh, this one doesn't seem to have gone like that.
47:53And I think it sits somewhere in between the, like, is it remarkable that that's
47:57possible versus is this actually useful?
48:01I think it is more in the, this is actually useful camp for people already
48:05than I expected it to be.
48:06Like, it's definitely remarkable that it is possible.
48:08And you showed a, a sort of sincerely new thing you can do with AI.
48:12And whenever that happens, people get really excited on Twitter, right?
48:15Like that, that's just one way to get people really riled up on Twitter for 12
48:18hours, but I think there was, you, you overlapped the, this actually does
48:22something for people side of things more than I expected when I first heard you
48:27can make an AI podcast out of your notes.
48:29Well, I want to thank you, one, for changing your mind.
48:32But two, you remember how Jobs used to have that line about Apple sitting
48:35at the intersection of liberal arts and technology? What you just
48:39described, the intersection of what is newly possible and what is genuinely
48:43useful, that's what we're trying to do with Notebook.
48:47And that's kind of what Labs is trying to do, right?
48:49It's just: what can we do
48:52that was unthinkable six months
48:55ago, and what would actually be useful?
48:56And part of it is, I think, that in some cases people are using it
49:01because they want to study, and this is a better way for them to take
49:05their documents and just learn, or learn on the go.
49:08The other thing is creating things that are effectively podcasts where no
49:14podcast would possibly ever exist in the real world, right?
49:17People are like, here's this very obscure niche topic that
49:20the economics of the podcast business will never support a podcast on.
49:24I want the podcast on my D&D game that I play with my friends.
49:28Yeah, yeah, yeah, no, totally.
49:29Or, again thinking about Notebook LM in a team context at work,
49:34we've seen people doing a week in review:
49:38give it all your documents.
49:41I always say to people that one of the best ways to get to know Notebook
49:44is, if you're a Google Docs and Slides and Drive user, to create a
49:51new notebook, grab the last 20 or so docs that you've created,
49:54add them, and then just ask it questions. Its
49:58ability to grasp what you're working on, the issues involved, and
50:01things like that is striking. Do that, and then create an audio overview,
50:05and it's like, this is what Steven was working on for the past week.
50:08Here are two happy and enthusiastic people to discuss
50:11the things you've been working on, and you can share that with your team.
50:15It's pretty nice.
50:17That's a great way to look back on the week and think
50:19about what you've been working on.
50:21And you can imagine other ways to explore that as well.
50:26All right.
50:26A couple more questions.
50:27I know I've kept you a long time, but I'll leave you alone here in a minute.
50:29I can talk to you about this forever.
50:30Yeah.
50:30You're even better than talking to Bard.
50:34Anytime, I'm here for you.
50:35I don't like skiing either.
50:36So, do you think there's a big mainstream use case for
50:43something like Notebook LM?
50:45Obviously there are people and industries and jobs and schools;
50:49I can imagine a bunch of people for whom this is useful.
50:52But do you think there is an everyday, everybody use case for something like this?
50:57Everybody, every day.
50:58I don't know.
50:59But I mean, to me, let me put this in a slightly different context, right?
51:02Google is a company that is famous for making things that a
51:05billion people use.
51:06Are there a billion Notebook LM users out there?
51:09I think if you define it as: there is an AI that is an expert in the information
51:18that is really relevant to you, that you've curated, and that you can
51:22engage with in different forms, whether it's chat or listening to an audio
51:27overview, then you can basically cultivate your own personal
51:32AI with a lot of information.
51:34Maybe it's all your journals, maybe it's your company's
51:38history, whatever it is. Just by dragging and dropping files in
51:43there, it develops this kind of knowledge of everything that has
51:46happened to you or your organization, and it's able to deliver advice,
51:50or help you make decisions, or just recall a fact that you're trying to
51:54remember, and create these new documents.
51:57Will maybe a billion people be doing some
52:02variation of that in five years?
52:04I think that's pretty plausible, because there are so many different things you
52:08could do in that kind of context.
52:10It could be a writer working on a book, but it could be an
52:13executive trying to make a complicated decision, or it could just be somebody
52:17trying to remember things that have happened in their life, like
52:19keeping a journal, only you've got an AI that's helping you do it.
52:23So I think there's a big market for it.
52:26Okay.
52:27You know what I've been thinking about as you were saying that: there's
52:29this app called mymind.
52:30Have you ever seen it?
52:31I've seen it a little bit.
52:32Yeah.
52:33It's a great app.
52:33You should play with it.
52:34I think you of all people would enjoy some of the AI stuff they're doing.
52:38I've started using that app, and I've gotten in the habit of basically every
52:42time I encounter something I like, I just put it there.
52:45If I love a podcast or a video or a thing I read or a photo I take or a quote
52:49that I see, I'm just pouring it all in there with no organization, no
52:52nothing. And there's nothing particularly special about that app;
52:55it's just pretty to look at.
52:57So I like using it.
52:58And already, just the thing where you open it up and it has a bunch
53:03of modes where you can type in, say, "movies," and it'll show you all the movies
53:06and movie-adjacent stuff that you've been saving.
53:08There's something really powerful about: here is just a
53:11compendium of stuff that I find interesting.
53:13But there's just enough manual work in that, that I think there's
53:19something to solve in how you collect that data from people in a way
53:24that is both easy and frictionless, but also not gross and
53:28privacy-invading. That is the eternal Google question.
53:31I have high hopes that somebody will figure it out eventually.
53:34That is a bias that I've always had, probably to a fault.
53:38We don't even have folders for your
53:42sources or your notes or things like that,
53:44basic low-level things that we will no doubt add, but we've been
53:48focused on these cooler features.
53:50Part of me thinks the complex systems of
53:54organization, where you're tagging things and putting them into the right folder
53:57and so on, are not as necessary now, because the AI does that: it makes
54:02the connections for you and finds the thing that you're looking for.
54:05So just have one place, make it easy to grab as much stuff from as many different
54:10places as possible, and just dump it into a notebook.
54:12Then we'll do the magic of figuring out the connections, or
54:16finding the information you want, after that.
54:19Again, you said it correctly before: the closest thing to this before was
54:23Wikipedia. I start here on this Wikipedia page about elephants and then
54:27I follow these interesting paths. But a dialogue is just an even
54:32better way to do that,
54:33if it's trustworthy.
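The "dump it all into one notebook and let the AI find it" idea maps, in spirit, onto embedding-based retrieval: index every chunk once, then search by similarity instead of by folders or tags. A minimal sketch of that general technique follows; it is not a description of NotebookLM's internals, and embed is a placeholder for whatever text-embedding model you would actually use.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a fixed-size vector for the text (plug in a real embedding model)."""
    raise NotImplementedError

class UnsortedNotebook:
    """No folders, no tags: every source gets chunked and indexed on the way in."""

    def __init__(self) -> None:
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def dump(self, source_text: str, chunk_size: int = 500) -> None:
        # Split the source into rough chunks and index each one.
        for i in range(0, len(source_text), chunk_size):
            chunk = source_text[i:i + chunk_size]
            self.chunks.append(chunk)
            self.vectors.append(embed(chunk))

    def find(self, question: str, k: int = 3) -> list[str]:
        # Cosine similarity between the question and every stored chunk.
        q = embed(question)
        scores = [
            float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self.vectors
        ]
        best = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]
        return [self.chunks[i] for i in best]
```

The design point is that "finding the thing you're looking for" becomes a similarity search rather than a filing decision, which is why the tagging-and-folders work matters less up front.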
54:38The mode that I really love, and again, this is only possible with
54:42conversation history and long context, is to go on one of those exploratory
54:48conversations through an idea.
54:50And then at the end, you say, okay, this has been great.
54:54Will you format all of that as a single document that just has the key
54:57takeaways and insights from it, so I can capture it for later?
55:00Because I don't necessarily want to read the whole conversation again, but I want
55:02to get the great pieces from it. And boom, it does it, and you save
55:07that, and that's your record of what you just did.
55:11That's a beautiful way to walk through information space,
55:16full of surprise and unexpected turns and discovery.
55:21So that feels like a keeper.
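The "format that all as a single document" step Steven describes can be sketched as a single distillation pass over the stored conversation history. This is a hypothetical illustration, not a real NotebookLM API; call_model and the history format are assumptions.

```python
# Hypothetical sketch: turn a long exploratory chat into a short, saveable
# takeaways document. `call_model` is a stand-in for a real LLM call, and the
# history format (role/text dicts) is an assumption for illustration.

def call_model(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("plug in a model of your choice")

def distill_conversation(history: list[dict[str, str]]) -> str:
    transcript = "\n".join(f"{turn['role']}: {turn['text']}" for turn in history)
    prompt = (
        "Format the conversation below as a single document containing the key "
        "takeaways and insights, so it can be saved and read later without "
        "rereading the whole exchange.\n\n" + transcript
    )
    return call_model(prompt)

# Usage (illustrative):
# notes = distill_conversation([
#     {"role": "user", "text": "..."},
#     {"role": "assistant", "text": "..."},
# ])
```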
55:23What's the next step in getting better at that?
55:25One thing that it seems to me we need pretty badly in that way of
55:30thinking, which I love; I love the idea of, I'm going to sit here
55:33and spend three hours learning about something,
55:35and then at the end you're going to deliver me a handy summary of
55:37everything that I've learned.
55:38Amazing.
55:39That's the dream.
55:40It does strike me that one thing we desperately need all of these tools to
55:43get better at, to pull that off, is multimedia, right?
55:45I want to know at the end of it: go listen to these three
55:48podcasts, and this YouTuber, you're going to love him.
55:51Go check it out.
55:51And this person on Instagram is doing that.
55:53The internet is such a messy place now, in the best way,
55:57that it feels like that's one missing piece.
55:59And I know a lot of folks are working on that.
56:01Are there other things that you look at, since you're
56:04working on this stuff too?
56:05Is there a next kind of turn coming
56:07that's going to make all that stuff even better?
56:09Right now,
56:10Notebook's ability to help you discover information to put in your notebook is
56:17exactly zero.
56:18It does nothing there; you are on your own.
56:22You need to supply the sources.
56:23We will not help you discover anything across the internet, in
56:28any form. It turns out, just as a coincidence, that Google is really good at
56:35that stuff.
56:36I did not know.
56:37I learned when I got there that they have this search thing that apparently a lot of
56:40people use.
56:40So it's obviously a place where, I would love it:
56:45I want it to stay source-grounded, but I would love for you to be
56:49able to find things without leaving Notebook.
56:50We would be really interested in exploring that.
56:54I think you'll see that in 2025 for sure.
56:57All right.
56:58That is it for the Vergecast today.
56:59Thanks again to Steven for being on the show, and thank you, as always, for
57:02listening.
57:03There's lots more on everything we talked about at theverge.com.
57:06I'll put some links in the show notes to some of the stuff Steven has written
57:09about this, too.
57:10His new book is really great.
57:12Go to theverge.com:
57:13lots of show notes, lots of Notebook coverage, all kinds of good stuff.
57:16As always, if you have thoughts, questions, feelings, or other ideas for
57:20AI-generated podcasts, you can always email us at vergecast@theverge.com or
57:24call the hotline.
57:25It's 866-VERGE11.
57:27We love hearing from you.
57:28We want to hear all your thoughts.
57:30Send them all.
57:30Can't wait to hear it.
57:32This show is produced by Liam James, Will Poor, and Eric Gomez.
57:35The Vergecast is a Verge production and part of the Vox Media Podcast Network.
57:38We'll be back with our regularly scheduled programming on Tuesday and
57:41Friday this week.
57:42We have a lot of headsets in particular to talk about this week.
57:44So get ready.
57:45We'll see you then.
57:46Rock and roll.
