Aaron Levie, Co-founder and CEO, Box
Moderator: Michal Lev-Ram, FORTUNE
00:00 This next founder has been operating in Silicon Valley
00:02 for more than half of his life.
00:04 With such experience comes a deep understanding
00:07 of the power players in the evolving technology landscape.
00:10 At a time when many organizations
00:12 are calling for efficiencies,
00:14 Box has already been there and done that.
00:16 In 2021, Aaron Levie successfully defended himself
00:20 against an investor's attempt
00:21 to oust him from his CEO position.
00:23 And as of last week,
00:25 Box reported an operating margin of 25%.
00:28 Here to talk more about the future of Silicon Valley
00:30 and regulating AI,
00:32 please welcome Box's co-founder and CEO, Aaron Levie.
00:35 (audience applauding)
00:39 Okay, Aaron says it wasn't exactly half of your life.
00:45 I think I was trying to do some math.
00:47 I think it's like one year, maybe less than that.
00:50 You should feel very, very young.
00:52 I do, although I'm feeling very old right now.
00:54 So I apologize if I don't make a lot of eye contact.
00:56 I have some like allergic face issue.
00:59 So we're trying to get to the bottom of it,
01:00 like shampoos and quite a bit more.
01:03 So I apologize if I'm not like staring right at everybody,
01:05 but it does make you feel very old.
01:07 Although, I'd also say ChatGPT is decent
01:11 at helping me get to the bottom of this.
01:12 - Well, I was gonna say, are you using it?
01:16 - Well, so the doctors are extremely defensive.
01:18 When I come in and I bring ChatGPT recommendations,
01:21 I found that it really dramatically changes
01:23 the course of the conversation
01:24 and they kind of want me out of there.
01:26 - Maybe don't lead with that.
01:27 - Yeah, exactly.
01:28 - All right, that would be my suggestion.
01:30 Okay, so we're gonna talk briefly about OpenAI,
01:33 and, I should say, the OpenAI saga
01:37 that enthralled all of us.
01:39 One of my favorite tweets, posts on X from you
01:44 during that crazy few days is what you wrote
01:48 when everything ended and it was,
01:51 okay, so can we all now agree to never do that again?
01:55 And I'm curious, I just wanna hear,
01:57 you're very prolific when it comes to pontificating
02:00 on a whole slew of topics,
02:03 including what we all witnessed.
02:05 So what are the big takeaways
02:08 for kind of Silicon Valley and tech at large?
02:11 - I think, and this is gonna be kind of annoying,
02:13 so we might need to jump to the next topic quickly.
02:15 I actually don't think there's that many takeaways.
02:17 I think literally, if you just looked at the ratio
02:22 of the amount of drama to the amount of takeaways,
02:26 it's like the ratio is really off.
02:28 (audience laughing)
02:29 And I think the main takeaway is:
02:33 don't have weird corporate structures.
02:34 Like it's just like, it never ends well.
02:36 And so that's probably the biggest takeaway.
02:39 I think if there's maybe another subtlety in this,
02:42 and I'm sure this is what's been talked about quite a bit
02:44 is we do have some factions emerging
02:46 in Silicon Valley around AI.
02:48 And so I think probably maybe the bigger takeaway
02:51 would be how do we land the plane as an ecosystem
02:55 on this topic of AI safety, AI regulation,
02:58 and these debates.
02:59 And clearly there were real sort of factions
03:02 emerging on the board and within organizations
03:04 more and more on this topic.
03:06 And it's hard to kind of reconcile the two different groups.
03:10 It's not obvious what you would land on
03:13 that would make everybody satisfied
03:14 because you have one group that is sort of increasingly
03:17 believing that AI could end the world.
03:19 And you have another group that sort of believes
03:21 that's not going to happen.
03:22 And in fact, if anything, we need to accelerate innovation
03:26 and progress in AI to get all the benefits of it.
03:29 And you can't square those two things
03:31 because one sort of imagines a nuclear winter,
03:33 I mean, a nuclear bomb going off,
03:36 and the other imagines that we would be able to solve
03:39 my face issue if we had just better AI.
03:42 So like, those are like hard to bring together.
03:44 - I love how you're calling more attention to it.
03:46 - I know.
03:47 (laughing)
03:48 - No one would have noticed.
03:50 - Okay, well done.
03:51 So, but talk about this,
03:53 'cause this was gonna be my next question.
03:54 Those two camps, you know, it's not binary, is it?
03:58 How do you bring the nuance into the dialogue?
04:01 - Yeah, no, so the reality is that probably 80% of people
04:04 are somewhere in the middle of that bell curve.
04:05 I mean, as with every political topic,
04:08 and this is a political topic at the end of the day,
04:10 you're mostly gonna hear about the edges.
04:12 And the reality is that the middle is,
04:16 we probably need a lot of progress
04:18 because there's a lot of benefits to AI,
04:20 whether it's education or healthcare or across the economy.
04:23 At the same time, we need to ensure that we have,
04:26 you know, a decent degree of safety and protection,
04:28 and maybe like, you know, some asterisks around this,
04:31 but some degree of regulation.
04:33 I tend to sort of believe that the regulation
04:35 should fall more in the category of regulating
04:38 the use cases of AI across the existing agencies
04:41 that tend to regulate the use cases of any technology,
04:44 as opposed to imagining there's some central,
04:46 you know, centralized monolithic regulator
04:48 that perfectly predicts all the things
04:50 that we should, you know, prevent AI from doing.
04:54 And so that would mean that the FDA should regulate AI
04:56 in its, you know, area of expertise, and the FAA and so on.
05:00 And that's probably the best outcome
05:01 because, you know, if you have this centralized
05:05 regulation of a model, one person might view an output
05:09 of that model as being dangerous.
05:10 Another might view that as actually
05:13 being extremely productive.
05:14 And the law really needs to be more about
05:16 how do you put that output into practice
05:19 and into, you know, some degree of production.
05:21 That's gonna be hard to essentially, you know,
05:23 sort of manage.
05:24 - Okay.
05:25 Please raise your hand if you have a question for Aaron,
05:27 and we'll get to you a little bit later.
05:30 On Box more specifically,
05:32 you've been trying to make knowledge workers
05:36 more effective for many years now, as we heard.
05:39 Half your life, almost half your life.
05:42 - That actually is true.
05:42 It was the Silicon Valley thing.
05:44 We moved to the Valley about a year into the company.
05:46 So, but half my life has been, you know,
05:49 working with files, very weird.
05:51 - So when I met you at Fry's Electronics in Palo Alto,
05:54 that was a year in.
05:55 - A couple years in, yeah, exactly.
05:55 - Okay, all right.
05:57 RIP Fry's, by the way.
05:59 But, so my question is,
06:03 with generative AI, what's different?
06:07 What's changed over the last year in the product
06:09 and in how you're engaging with customers?
06:10 And how you're working internally.
06:11 - Yeah, so for us, I mean, we kind of landed on,
06:15 you know, what I think of as one of the best opportunities
06:17 in AI, because we work in the land of unstructured data
06:21 and unstructured, you know, documents.
06:22 And so what's inside of a document?
06:24 It's lots of text.
06:24 What's a large language model really good at?
06:27 Understanding and working with text.
06:28 So if you imagine, like, where is there a large corpus
06:32 of information in any kind of enterprise?
06:35 It's in your documents.
06:36 And that could be your legal documents,
06:38 your medical information, equity research, contracts,
06:42 just anything that you have that is full of text.
06:45 AI models are increasingly good at summarizing,
06:48 synthesizing, extracting data from, and searching across.
06:51 And so that was a really kind of profound opportunity
06:53 for us.
06:54 And we tried a bunch of things with AI over the past,
06:56 you know, seven plus years.
06:57 And what we found is you needed very narrow models
07:00 for each use case, because that was the only way
07:02 that we could get the right level of quality
07:04 and efficacy from AI.
07:06 And so the real breakthrough recently was GPT-3.5,
07:09 and then GPT-4, maybe now Gemini,
07:12 where you have these models that can work
07:14 across a large variety of contexts.
07:15 That's what makes it actually more practical
07:17 for, you know, kind of any given random enterprise
07:19 to be able to use this technology.
07:21 Because previously when we tried to bring AI
07:23 to your documents, we would have to say,
07:25 okay, so you're a law firm that's based in the UK
07:27 working on these kinds of contracts.
07:29 Here's a specialized model just for you.
07:31 That was the only way to really kind of solve these problems.
07:33 Now you can do it in a generalized way,
07:34 which means that you actually can scale this as a product.
07:38 And so we're in the very early stages with Box AI.
07:40 It kind of does exactly what you'd imagine we would do.
07:42 It takes all of your data,
07:43 plugs in an abstraction layer that connects to OpenAI
07:48 or Google and other partners over time,
07:50 and then lets you work across your data
07:53 with those AI models.
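To make that abstraction-layer idea concrete, here is a minimal sketch of routing one question across interchangeable model vendors. All class and function names are hypothetical illustrations, not Box's actual implementation, and the provider calls are stubbed rather than hitting real APIs.

```python
# Minimal sketch of a vendor-agnostic abstraction layer over LLM providers.
# Names are hypothetical, not Box's actual API; provider calls are stubbed.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface so the application never depends on one vendor."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class OpenAIProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call OpenAI's API here.
        return f"[openai stub] {prompt[:40]}..."


class GeminiProvider(LLMProvider):
    def complete(self, prompt: str) -> str:
        # A real implementation would call Google's Gemini API here.
        return f"[gemini stub] {prompt[:40]}..."


def ask_document(provider: LLMProvider, document_text: str, question: str) -> str:
    """Ground the model's answer in a document, whichever vendor runs it."""
    prompt = f"Document:\n{document_text}\n\nQuestion: {question}\nAnswer:"
    return provider.complete(prompt)


# Swapping vendors is a one-argument change at the call site:
print(ask_document(OpenAIProvider(), "Master services agreement...", "What is the term?"))
print(ask_document(GeminiProvider(), "Master services agreement...", "What is the term?"))
```

The point of the design is that the application depends only on the shared interface, so adding or swapping a model partner never touches the document-question logic.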
07:54 -And you said maybe now Gemini.
07:56 A lot of conversations about vendor lock-in
07:58 and fear of that.
07:59 How are you guys approaching that question?
08:01 -Yeah, I mean, I think it's prudent probably for everybody
08:03 to have as much optionality
08:06 and a future-proof strategy with your AI.
08:09 Certainly if you're a smaller startup,
08:11 speed to market and thus sort of probably picking a vendor
08:15 right now matters a lot.
08:16 That was actually why I think the OpenAI moment
08:19 was so dramatic.
08:21 It was not your classic sort of leadership struggle or dynamic.
08:25 This was a technology that's been embedded
08:26 in tens of thousands of products at this point,
08:28 and so it really had a full ecosystem.
08:31 It was consequential across the ecosystem,
08:34 which is why you saw so much of the ecosystem
08:37 rallying around the issue.
08:39 But for us, we want to work with multiple vendors
08:41 because we know there's going to be different specializations,
08:43 different costs, different approaches to the technology,
08:46 and so we'll plug in any vendor
08:48 that our customers want to work with.
08:50 -Okay. We have a question back there.
08:53 -Hi. Amy Lemire from WXR Fund.
08:55 So one of our portfolio companies does AI for rashes.
08:59 -Oh, my God. Thank you. What is it called?
09:00 -Piction Health. -Okay.
09:02 -I was going to say, any questions, just not about his face.
09:04 -I'll take any recommendations.
09:06 If you know an AI app for face rashes, please let me know.
09:09 -Yeah, so they do. -Okay.
09:11 -But related to that, at what point --
09:13 How do you assess if you can trust that data?
09:17 -Yeah, going to the doctor afterwards
09:19 with the recommendation.
09:20 So, I mean, actually, I'm not being facetious.
09:24 I actually think this is why I'm extremely optimistic
09:26 about humans and AI,
09:29 and I think this is what gets missed.
09:31 And there's plenty of good research and literature on this,
09:34 but in my own personal life
09:36 and I think probably most people's personal lives,
09:38 AI is a rapid accelerant to getting information,
09:41 getting insights and expertise,
09:44 and then you almost instantaneously
09:45 go and ping a person who you know as the expert in that domain
09:49 and you say, "Hey, does this sound right?"
09:51 So this idea that AI replaces lawyers
09:54 is just completely impossible.
09:55 AI will accelerate our ability to get advice
09:58 that we then go ask a lawyer about.
10:00 And so -- because at the end of the day,
10:03 we're going to need liability.
10:04 There's going to be liability in the system.
10:06 You want a person to be able to apply
10:08 some degree of human judgment on your specific circumstance,
10:11 and the amount of context you have to have
10:14 about that individual's circumstances is, at this point,
10:17 you know, probably three to five orders of magnitude
10:19 more than what the AI model is getting
10:22 when you just do a quick chat back and forth.
10:24 And so all of that context, all of the situational awareness,
10:27 all of the understanding of, you know,
10:29 just like what is germane to this particular person's issue
10:33 because of the, you know, jurisdiction
10:34 or the place that they're at,
10:36 like, the AI model is never going to really --
10:39 I mean, anytime soon, it's not going to have that level of depth.
10:41 And so, you know, whether it's coding or recommendation systems
10:47 or, you know, healthcare or education,
10:50 I think it actually increases the ultimate sort of usage
10:54 of a lot of the surrounding products
10:56 and services of those industries.
10:57 So I'm mostly, you know,
10:59 extremely optimistic across the board.
11:01 Even things like designers, you know, my quick belief
11:06 would be that actually the design consumption
11:08 and, thus, design services goes up as a result of AI
11:12 because what it does is it lowers the bar
11:14 to experimentation dramatically.
11:16 As you experiment and try new things,
11:18 you instantly say, "Oh, actually, like,
11:20 let's turn that into a real thing."
11:21 And then all of a sudden, professionals, you know,
11:23 you go out to some kind of professional
11:25 for whatever that product or service is.
11:27 So whether it's, you know, again, legal reviews,
11:30 making a web page, healthcare advice, tutoring,
11:32 I think this is good across the board for the economy.
11:36 -Other questions for Aaron? Raise your hand, please.
11:39 Okay. We've got one back there.
11:42 -Thank you. Hi. Jennifer Fonstad, Owl Capital.
11:49 Good to see you. -Hi.
11:50 -My question's about privacy
11:53 and around the idea of using proprietary data.
11:56 So a lot of folks are thinking about --
11:59 a lot of companies are thinking about
12:01 how to bring in their proprietary data
12:03 in a way where they're not risking that --
12:06 the privacy issues around their customer data
12:09 and then also thinking about proprietary --
12:11 I'm sorry, privacy from an application
12:14 in the cloud such as yours.
12:15 So I'm wondering how you're thinking about those.
12:16 -Yeah. -And in particular,
12:18 if there are any vendors that you are finding
12:20 that are useful in those areas.
12:21 -Cool. Yeah, good. Thanks.
12:23 So, I mean, like, I don't want to turn this
12:26 into an advertisement, but, unfortunately, like --
12:28 -I will. -But I will.
12:30 And -- but, like, your question is exactly dead-on
12:34 to why we think there's a big opportunity,
12:36 which is that probably the most boring part of our service,
12:43 the part that, literally, I shouldn't even be allowed
12:45 to mention on a stage because it's so boring,
12:47 access controls and permissions, is going to become, like,
12:51 a 10 times larger problem with AI.
12:53 Because if you just said, "Hey, you know, here's this,
12:55 you know, AI assistant that works
12:56 across your entire enterprise
12:57 and accesses everything across your organization,"
12:59 and somebody asks a question of that,
13:01 the likelihood of that, you know, sharing information
13:03 that the user actually is not supposed to know about
13:06 because they're not actually supposed to have
13:08 the source information accessible to them
13:10 is very high right now.
13:11 Because we've all dealt with, you know,
13:13 information security through obscurity, essentially,
13:16 where we sort of obfuscate the access to the data
13:19 as the way of securing it.
13:20 But all of a sudden, AI wants to go look at everything.
13:23 It wants to consume everything.
13:24 And so that's going to be this big problem,
13:26 which is, you know, when Bob asks for Sally's HR,
13:31 you know, data or salary,
13:33 is Bob supposed to get an answer back or not?
13:35 And a lot of our, you know, typical enterprise systems
13:38 are not prepared for AI going out
13:40 and trying to ask that question and getting the answer.
13:43 So this is where things like, again,
13:44 like just like understanding who has access to what data,
13:47 the controls around it, the privacy around it
13:49 matter a ton.
13:49 Obviously, that's, you know, the business we're in,
13:51 but there's going to be thousands of products that do that.
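As a minimal sketch of that permissions point, assuming a toy in-memory corpus and a made-up `allowed_users` field (real enterprise ACLs are far richer), the key move is filtering to documents the user could already open before any text reaches the model:

```python
# Minimal sketch of permission-aware retrieval: enforce access controls
# *before* any document text can reach the model. The `allowed_users`
# field is a toy stand-in for real enterprise ACLs.
from dataclasses import dataclass, field


@dataclass
class Document:
    text: str
    allowed_users: set = field(default_factory=set)


def retrieve_for_user(user: str, corpus: list) -> list:
    """Only documents the user could open directly are eligible AI context."""
    # A real system would also rank the visible documents against the query.
    return [doc for doc in corpus if user in doc.allowed_users]


corpus = [
    Document("Sally's salary record: ...", allowed_users={"sally", "hr_admin"}),
    Document("Q3 product roadmap", allowed_users={"bob", "sally"}),
]

# Bob's question can only be answered from documents Bob may see, so the
# HR record never enters the prompt in the first place.
context = retrieve_for_user("bob", corpus)
assert all("salary" not in doc.text for doc in context)
```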
13:53 On the training side, this is where I think
13:57 I'm concerned a little bit that sort of training on
14:00 your own data set right now is not a complete panacea
14:03 because, you know, a lot of companies, I think,
14:06 initially want to go and train a model on their data,
14:09 but then instantly you ask the question of like,
14:11 well, what's your -- like, what data?
14:13 Like, and then you have to get pretty narrow pretty quickly
14:17 because anything that is sort of generalized
14:19 across your business, probably not everybody in the company
14:22 can get an answer back in a way that would be appropriate.
14:25 And so you can't just like train your company AI model
14:28 on HR information or on product information
14:32 because can anybody in the organization
14:34 ask a question of that AI model and get an answer back?
14:36 Like, would that be appropriate?
14:38 And, you know, all of a sudden
14:39 you'd have corporate secrets getting revealed.
14:42 So things like, you know, data protection
14:45 in the AI model itself is, like, a complete open question.
14:48 I mean, there's a lot of startups
14:49 that I think are working on that.
14:51 It seems to be like, you know, a relatively
14:53 either difficult or intractable problem at the moment
14:55 unless you put an application layer, you know,
14:57 between the user and the AI model
14:59 that is sort of watching for all that.
15:01 But I would say just super early days,
15:03 somebody will make a ton of money, you know,
15:05 solving that problem, but these are the things
15:07 that I think we still have to explore.
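One hedged sketch of that "application layer between the user and the AI model": a screening function that withholds responses matching sensitivity rules before they reach the user. The patterns below are illustrative assumptions; a production guardrail would use real classifiers and a policy engine rather than two regexes.

```python
# Minimal sketch of an application-layer guardrail between the user and the
# model: screen each response against sensitivity rules before returning it.
# The patterns are illustrative assumptions, not a real policy engine.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),       # looks like a US Social Security number
    re.compile(r"\bsalary\b", re.IGNORECASE),   # crude HR-data tripwire
]


def screen_output(model_response: str) -> str:
    """Withhold responses that trip any sensitivity rule."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(model_response):
            return "This response was withheld by policy. Contact your administrator."
    return model_response


print(screen_output("The roadmap ships in Q3."))        # passes through
print(screen_output("Her salary is 185,000 a year."))   # withheld
```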
15:09 - Aaron, thank you so much.
15:10 We're gonna let you go and look up that app, okay?
15:12 - Okay, thank you, okay.
15:13 All right, see you.
15:13 - Thank you, Aaron.
15:14 Appreciate you being here.
15:15 (audience applauding)
15:18 Next up, we are going to take a 30-minute break.
15:21 We'll meet back here at 4:15, and when we come back,
15:24 we're going to kick off with a demo from Hour One
15:27 on how to turn text into generative AI video content.
15:30 See you back here shortly.
15:31 Thank you.