Anastasis Germanidis, Co-founder and CTO, Runway
Kylan Gibbs, Co-founder and Chief Product Officer, Inworld AI
Ely Greenfield, Chief Technology Officer, Digital Media, Adobe
Moderator: Ellie Austin, FORTUNE
Transcript
00:00 Thank you to Jeff and our panelists.
00:02 Now, our demo with Natalie earlier
00:04 showed us the potential AI has in content making,
00:08 but what does this mean for the entertainment industry,
00:11 for the music industry, for TV and film?
00:14 Our next three panelists are from companies
00:16 that are using machine learning and large language models
00:19 to reimagine the entertainment industry.
00:22 Runway creates tools that democratize filmmaking,
00:25 enabling artists to spend less time on production
00:28 and more time on creativity.
00:30 It also offers video generation from text prompts
00:33 to create HD videos.
00:35 Inworld AI is helping reshape
00:38 the non-player character dialogue in video games
00:41 with multiple machine learning models.
00:43 The company has recently partnered with Microsoft's Xbox
00:47 to build a new tool set for video game developers.
00:50 And finally, Adobe unveiled its own generative AI tech
00:54 earlier this year called Firefly.
00:56 So to discuss the potential for all of this
01:00 in the entertainment industry,
01:01 please welcome our three panelists,
01:03 Anastasis Germanidis,
01:04 co-founder and chief technology officer at Runway,
01:08 Kylan Gibbs, co-founder and chief product officer
01:11 at Inworld AI,
01:13 and Ely Greenfield,
01:14 chief technology officer of digital media at Adobe.
01:19 (upbeat music)
01:22 (audience applauding)
01:25 - Hello everyone, and welcome to the stage.
01:31 Anastasis, let's start with you.
01:33 I touched on in the introduction
01:36 kind of the broad picture of what Runway does,
01:38 but can you talk us through exactly how you generate video
01:43 and what your business model looks like?
01:45 - Yeah, absolutely.
01:46 So Runway is an applied research company.
01:49 We build models that essentially assist users
01:53 in a variety of creative workflows.
01:57 We build a series of creative tools
01:59 that employ generative models
02:02 to allow folks to generate video from scratch
02:04 or to generate images.
02:06 Our biggest effort
02:07 over the past year
02:10 has been in video generation more specifically.
02:13 So we do text to video, image to video, video to video,
02:17 and allowing people to either transform
02:20 existing video content
02:21 or to generate content from a text prompt
02:23 or from an existing image.
02:24 - And is this for B2B usage, within the business world?
02:27 Who are your clients at the moment?
02:29 - So there's a wide range of creators using Runway,
02:34 but we target professional creators specifically.
02:37 We have a lot of creative teams
02:39 from advertising agencies, from media companies,
02:42 using Runway every day, collaborating on content,
02:45 employing generative techniques
02:47 to essentially create things faster.
02:50 - And Kylan, for any non-gamers in the room,
02:52 what are non-player characters
02:54 and why does their development matter?
02:56 - So when you go into a video game,
02:59 or even when you go into a Disney theme park,
03:01 and you have these powered characters
03:03 with no real human driving them,
03:05 these are traditionally called non-player characters,
03:07 and they tend to be very dumb.
03:08 So when you go into a video game
03:10 and you're going through this amazing world
03:12 that's had five to eight years of development going into it
03:15 and you go up to a character
03:16 that's supposed to sort of elicit the next quest
03:19 or the next part of dialogue or drive your experience,
03:22 usually they'll give you a one-line answer,
03:23 and if you talk to them again,
03:24 they're gonna give you that same one-line answer.
03:26 This is one of the weak points
03:28 of video games, and I think of a lot of media experiences.
03:30 So what we're looking at is how we introduce
03:32 that interactivity into video games,
03:35 but also media at large,
03:37 so that audiences are able to participate
03:39 in the storyline rather than
03:40 just receive it as consumers.
03:42 - So it's about improving the user experience.
03:45 - Yeah, it's all about that immersion.
03:46 I think what we see is, one,
03:47 there's a role-playing element
03:48 to why we engage in media and entertainment,
03:51 and this can enhance that ability
03:52 to feel like the world is alive.
03:53 It also increases the emotional receptivity,
03:56 so you actually feel like what you're interacting with
03:57 is alive and meaningful.
03:59 So yeah, we see significant increases
04:01 in player engagement basically everywhere
04:03 we're integrated now.
04:04 - Now Ely, most of the big tech companies
04:06 have launched AI models in recent months.
04:09 How does Firefly stand out from the crowd?
04:11 - Great question.
04:14 We integrated Firefly into our creative tools,
04:16 which are targeted to power the world's creatives
04:19 of every stripe, based on
04:22 three key foundational pillars.
04:25 First and foremost, we designed Firefly
04:26 to be safe for creative use.
04:28 So when we first started looking at AI technology
04:30 out in the world, and thought about
04:31 how we could bring this into real everyday
04:33 creative workflows, the biggest problem
04:36 we were hearing from a lot of customers,
04:38 beyond the technology still developing
04:40 and the quality not being there yet, was
04:41 whether it was something they could legally use.
04:44 There's lots of questions still swirling around copyright,
04:47 around ethics, around legality,
04:49 who's gonna get sued for what.
04:50 And so we realized that to put this
04:52 into our customers' hands in ways
04:54 that were actually usable to them,
04:57 we had to train on licensed, qualified content
05:01 that came from either our Adobe Stock license
05:03 or from open source content,
05:05 went through a heavy moderation process
05:06 and put it behind our traditional indemnification guarantee
05:09 that we put behind our stock content
05:11 so that enterprises, individuals, large and small
05:14 could all feel confident actually using our AI
05:17 without worrying about the legal implications.
05:19 Beyond that, integration into our tools
05:20 is obviously a big differentiator
05:22 and then a lot of work we're doing on customization
05:24 of the models with enterprise customers.
05:26 - And how do you two think about the material
05:28 you source, where you source the material from?
05:31 I was reminded that earlier this year
05:34 I think it was a group of authors,
05:35 including George R.R. Martin, that filed a lawsuit
05:38 against OpenAI, alleging that it trained its AI
05:42 on their work without their consent.
05:45 I wonder what your reaction was to that
05:47 and how you think about sourcing material.
05:50 Anastasis, let's start with you.
05:51 - Yeah, so data is a very important piece
05:54 of building those models,
05:55 especially very high quality data.
05:58 That's why we've been exploring a lot of collaborations
06:02 with data partners to essentially allow us
06:05 to train those models at a larger scale
06:07 on larger data sets.
06:10 We recently announced a partnership with Getty Images.
06:14 The main focus of that was really
06:18 that we see the value of really high quality data
06:20 when training those models.
06:22 Not everyone has the data
06:26 to train a model from scratch,
06:27 but a model
06:30 that's trained entirely on Getty data
06:34 and then customized for enterprise customers
06:36 is an option that we'll be rolling out
06:38 over the coming months.
06:40 So we're very much interested in more data partnerships,
06:45 figuring out ways in which we can further
06:47 build high quality models
06:49 on really well curated data sets.
06:52 - Kylan?
06:53 - Yeah, and some of that resonates.
06:54 So, when we first started out, we
06:55 thought we'd make a product that could be used
06:57 by every creator in the world.
06:58 And what we've realized is we've ended up working
06:59 with a few of the top end creatives in the world.
07:02 So, AAA game studios, the Disneys, the Warner Brothers,
07:04 the Universals, and all of these groups
07:06 have extremely highly protected IP.
07:08 So, last week I was in LA giving a talk
07:11 with Neal Stephenson, who's an advisor of ours.
07:13 What he did was take the IP from Snow Crash
07:16 and do a project with us where he basically trained a model
07:18 to build a character coming from that universe,
07:21 and then we fine-tuned the entire
07:23 system end to end to fit to that.
07:25 And so, what we've learned over time
07:27 is that our job is to make a system
07:29 that creatives can bring their own data to
07:31 and quickly fit to their
07:33 parameters, their use case,
07:35 their environment, and their IP,
07:36 and be very protective over that.
07:38 And so, what we've ended up doing
07:39 with a lot of our gaming customers
07:40 is actually building custom models, custom data privacy,
07:43 custom infrastructure, end to end.
07:45 And that's really hard to do at scale,
07:48 but when you're working with like high end creatives
07:50 and high end entertainment companies
07:51 that have really protective IP, it's super important.
07:54 And so, we basically ended up doing this.
07:55 And I think Adobe's done this really well,
07:57 in terms of actually building
07:58 for what these large enterprises need,
07:59 which is very different from what someone building
08:01 a TikTok video needs.
08:02 So, that's been our approach so far.
08:04 - Now, the Hollywood strikes this year,
08:07 parts of which really centered around the use of AI.
08:09 And actually, the unions ended up
08:10 making some pretty significant wins
08:13 in terms of protections around actors and writers.
08:16 I wonder, Ely, and I'll start with you on this.
08:18 Does anything about the guardrails imposed around AI
08:21 and the outcome of those strikes concern you
08:23 in terms of stifling innovation going forward?
08:26 - Sure, it's a great question.
08:28 So, I am not a lawyer, I'll say up front.
08:30 - No, you're not.
08:31 - But I break the use of AI in production,
08:35 especially of high-end content,
08:37 though high-end or low-end doesn't matter,
08:40 into really two phases, actually three phases, I'd say.
08:43 One is accelerating production.
08:46 So, the manual grunt work of: I have the vision,
08:48 I know what I wanna create, I just need to do the work.
08:52 That's something that AI today can add a lot of value to.
08:56 And frankly, that's just the next step in the journey
08:58 that, for example, the film industry has been on
09:00 for decades with virtual sets
09:03 and green screen cinematography.
09:04 It's going more and more to capturing motion intent
09:07 and performances and then doing the production work
09:09 around that.
09:10 As I understand it, I don't think any of the agreements
09:13 limited that, which I think is great,
09:14 because that is a win for everyone.
09:16 I think the other place that AI can get used
09:19 and people are looking at it,
09:20 which is where some of the agreements did touch,
09:22 is around the idea of using AI in development
09:25 and trying to compete with humans for creativity,
09:27 whether that's in script development or performance
09:31 or any of the human creative pieces
09:33 that are brought to the table.
09:35 I think the protections that were put in there are great.
09:38 Frankly, from what I've seen with the work we've done
09:40 in imaging AI and vector AI and some of the other places
09:43 we've been doing over the past year or so,
09:46 I don't think we're at risk right now of the AI
09:50 actually producing the kind of quality creative content
09:54 that a human can.
09:55 So, I think those protections are great.
09:56 I'm fully supportive of them.
09:58 I think we would have found, even without them,
09:59 that for people who tried to replace real human performance
10:04 and take advantage of an actor's likeness
10:08 without being able to capture the performance,
10:12 I don't think it would have compared
10:13 with what a human can do anyway.
10:14 So, I think it all actually landed in the right place.
10:18 - Yeah, I think so,
10:19 just from engaging with some directors, producers, folks.
10:21 Like, I think that a lot of people
10:23 were very scared, and rightfully so.
10:24 And I think that when a new technology comes out,
10:26 any time over my memorable history
10:30 and I think into the past,
10:32 there's a point when you try and use that new technology
10:34 to recreate what people have already been doing,
10:36 but faster and cheaper.
10:37 And that's basically where you see a lot of the losses
10:40 for human labor, and then a lot of the gains
10:42 that end up being made when you find a new form factor
10:44 that was never possible before without the new technology.
10:47 And I think we're at that cusp now,
10:48 and I think as I was engaging with these groups
10:50 and kind of talking through it with them,
10:52 there was a lot of, I think, realization
10:53 that we're not here, at least a lot of us aren't here,
10:56 to recreate what humans have been creating
10:58 for the last decades.
11:00 We're actually here to instantiate something new,
11:02 and I think that's a key factor.
11:03 And I think we should have protections
11:05 against the sort of replication
11:06 of what humans have already been doing.
11:08 I think about it
11:09 as the general pie: the AI eating into that pie
11:12 versus expanding the general size of it.
11:14 And I think the latter is generally an overall good
11:16 for everyone.
11:17 That's, I think, how we've been thinking about it,
11:18 and ultimately what we're doing is creating games
11:20 and experiences that certainly no human is powering today.
11:23 And so it's just adding something new into the world,
11:25 and I think that's positive.
11:26 - I'm gonna open it up to questions in a second,
11:28 but Anastasis, I've got one final question for you.
11:30 Now, a lot of the videos you produce at Runway,
11:32 they're fun, they're harmless, they're very useful,
11:34 but like all AI, it could be used by bad actors.
11:38 And I'm thinking particularly
11:39 as we move into an election year,
11:41 do you have any concerns around your technology
11:44 being used for possibly nefarious political means,
11:48 and how are you gonna mitigate against that if it happens?
11:51 - The way we think about content moderation
11:53 is being kind of six months ahead
11:54 of any capability improvements.
11:56 So essentially,
11:59 the task when you moderate content on the platform
12:04 is a multi-modal one.
12:05 You need to moderate both the text
12:07 and the visual output that comes from the platform.
12:10 So we've developed models that allow you to
12:12 work on both sides,
12:13 making sure we have enough guardrails
12:16 to prevent harmful use on both sides.
12:18 So there is harmful use in terms of misinformation,
12:20 which we kind of are actively monitoring
12:22 and kind of building protections around.
12:24 There are other kinds of harmful use
12:25 that we also have models to detect in real time
12:28 and essentially enforce against.
12:30 So it's something that we need to continue developing
12:32 along with the models,
12:34 but we're paying as much attention,
12:38 and have an alignment and safety team,
12:40 as we pay to actually improving the models themselves.
12:42 - So if I logged onto the technology today
12:44 and tried to make a video,
12:45 an unflattering video of a political candidate
12:47 that I didn't support,
12:48 what would that trigger from your side?
12:52 - Yeah, so the moderation model
12:54 would essentially flag that content.
12:56 Like we have protections against that
12:59 and it's prohibited by the terms of use as well.
13:01 - Okay, does anyone have--
13:02 - I just wanna add a plug here for a project
13:05 that we started a few years ago.
13:06 It's an open project
13:08 called the Content Authenticity Initiative.
13:10 It's driven by open source, open standards.
13:12 We have over a thousand member organizations in it,
13:14 technology members, media members.
13:16 And the goal is to drive these technology standards
13:20 that allow you to add essentially
13:22 what we like to think of as a digital nutrition label
13:25 onto your media.
13:26 Just as you can go to the supermarket today,
13:27 pick up any piece of food,
13:29 and there is a recognizable label on there
13:31 that you can look at,
13:32 and it tells you exactly what's in it,
13:34 what went into the making of that food,
13:36 the idea of the content authenticity,
13:38 or content credential, standard
13:40 is to put the same thing on media.
13:41 So for any media coming out of any of our companies,
13:45 or any other technology out there that can create content,
13:47 including some cameras,
13:50 I believe some cameras have announced
13:51 that they're actually including this directly
13:53 at the point of capture in their hardware,
13:56 this allows you, the consumer,
13:58 to look at a piece of content
13:59 and identify who produced it, when it was produced,
14:02 and how it was produced,
14:03 so you can tell,
14:04 if it's a political message,
14:06 whether it actually came from the political campaign
14:09 that it purports to be coming from.
14:11 So it's something that we think
14:13 that needs to get broad adoption
14:14 to be able to combat these issues.
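[Editor's note: the Content Credentials (C2PA) standard embeds cryptographically signed manifests in the media file itself. As a loose illustration of the "nutrition label" idea only, the flat, hypothetical JSON manifest below shows the kind of fields involved; it is not the real manifest format.]

```python
import json

# Hypothetical, simplified "digital nutrition label" for a piece of media.
manifest_json = """
{
  "title": "campaign_ad.mp4",
  "producer": "Example Campaign Media Team",
  "produced_at": "2023-10-01T12:00:00Z",
  "actions": [
    {"tool": "Example Camera", "action": "captured"},
    {"tool": "Example Editor", "action": "color-graded"},
    {"tool": "Example GenAI", "action": "background generated"}
  ]
}
"""

manifest = json.loads(manifest_json)

def summarize(manifest: dict) -> str:
    """Render the label roughly the way a viewer might see it:
    who produced the media, when, and how, including any AI steps."""
    used_genai = any("GenAI" in a["tool"] for a in manifest["actions"])
    lines = [
        f"Who: {manifest['producer']}",
        f"When: {manifest['produced_at']}",
        f"How: {', '.join(a['action'] for a in manifest['actions'])}",
        f"AI used: {'yes' if used_genai else 'no'}",
    ]
    return "\n".join(lines)

print(summarize(manifest))
```

The real standard adds digital signatures so the label itself cannot be forged, which is what makes it usable for verifying political messages.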
14:18 - Any questions for our panelists?
14:20 Yes, there's one at the back.
14:22 Could you say your name and where you're from, please?
14:25 And we're just coming to you with a mic,
14:26 so one second, thank you.
14:28 - I usually don't need one of these, but it'll help.
14:31 Suzanne, Invisible Technologies.
14:34 Does a solution exist today
14:38 to solve the problem of AI alignment
14:41 of being able to keep it from hallucinating
14:44 and produce quality results?
14:45 Or are you having to sort of piecemeal it together
14:47 from the ecosystem?
14:49 - I can briefly give an answer.
14:51 So there's no out-of-the-box solution,
14:54 but there's reinforcement learning from human feedback;
14:57 many common frameworks include
15:00 preference optimization as a part of that.
15:02 So you can either take two things,
15:04 have a human tell you which one's better,
15:05 and then create more like that one,
15:07 or you can tell it using things like thumbs up, thumbs down.
15:09 And hallucination is still super difficult,
15:12 but actually there's a lot of research that shows
15:14 that if you have enough examples,
15:15 it can work like that.
15:16 So this is where I think there is still
15:18 that collaboration between humans and AI.
15:20 And I think most responsible systems
15:21 will have gone through that layer,
15:23 but I haven't seen anything that does it
15:25 without having humans in the loop.
15:26 And I think that's actually probably a good thing,
15:29 in terms of having that human monitoring,
15:31 but that's one solution that exists.
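[Editor's note: as a minimal sketch of the pairwise preference idea Gibbs describes, take two things, have a human say which is better, then fit a reward model to agree, the toy below trains a linear Bradley-Terry reward model on synthetic labels. The features, labeler, and dimensions are all synthetic, not any panelist's system.]

```python
import numpy as np

# Toy "responses" represented as feature vectors (stand-ins for embeddings).
rng = np.random.default_rng(0)
dim = 8

def reward(w, x):
    """Scalar reward for a response with features x under reward weights w."""
    return x @ w

# Synthetic ground truth: the "human labeler" secretly prefers responses
# aligned with w_true, i.e. gives them the thumbs up.
w_true = rng.normal(size=dim)
pairs = []  # list of (winner, loser) feature pairs
for _ in range(500):
    a, b = rng.normal(size=dim), rng.normal(size=dim)
    pairs.append((a, b) if a @ w_true > b @ w_true else (b, a))

# Fit the reward model by gradient descent on the Bradley-Terry loss:
#   -log sigmoid(reward(winner) - reward(loser)), summed over labeled pairs.
w = np.zeros(dim)
lr = 0.5
for _ in range(200):
    grad = np.zeros(dim)
    for winner, loser in pairs:
        margin = reward(w, winner) - reward(w, loser)
        p_wrong = 1.0 / (1.0 + np.exp(margin))  # sigmoid(-margin)
        grad -= p_wrong * (winner - loser)
    w -= lr * grad / len(pairs)

# The learned reward should now rank pairs the way the labeler did.
correct = sum(reward(w, a) > reward(w, b) for a, b in pairs)
print(f"agreement with human labels: {correct / len(pairs):.2%}")
```

Real RLHF pipelines then optimize the generator against this learned reward; the sketch stops at the part the panel discussed, turning "which one's better" judgments into a model.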
15:33 - The way we think about hallucinations is as a spectrum.
15:36 We build creative tools,
15:38 so hallucination can be a feature as much as a bug:
15:41 you wanna create new scenarios
15:42 that might not already exist.
15:44 At the same time, you want some degree of groundedness.
15:47 Let's say, if you generate a video,
15:49 you want it to follow the rules of gravity in most cases.
15:52 So I think the challenge for us
15:55 is how we allow users the ability
15:58 to choose where they wanna be on that spectrum.
16:01 People might wanna create animated features,
16:04 which do not necessarily need to abide
16:07 by any photorealism,
16:09 but in other cases, they might really need that photorealism.
16:11 So that's a problem we're thinking about a lot:
16:14 allowing users that control
16:16 of where they wanna be on that spectrum.
16:19 - We've got time for one more brief question
16:21 if it is out there.
16:22 Yes, over here, please.
16:23 Mic is coming to you.
16:24 Just tell us your name and your company, please.
16:27 - Joel Protich, Axel Springer,
16:29 publisher of Business Insider and Politico.
16:31 I'm just wondering,
16:34 because you're all somehow related
16:35 to the production of media:
16:38 how do you envision the future of media?
16:42 What will media look like three years from now?
16:45 Because I think we're all asking ourselves
16:47 these days this question.
16:48 - Ely, let's start with you. - A tough one, yeah.
16:50 - Yeah, that is a very big, broad question.
16:54 I mean, I think, you know, first and foremost,
16:57 you know, we've been on this path for years now,
17:00 for decades, right?
17:01 And Adobe's been a part of it,
17:03 about just making the production of video
17:05 more and more democratized,
17:07 more and more open and accessible.
17:08 So I think with generative AI,
17:12 you can look at it as just the next step in that evolution.
17:15 It is a massive step.
17:17 It's an incredible new technology
17:19 that we haven't quite tamed yet
17:21 to be able to get that level of control
17:22 that Anastasis was talking about,
17:23 but I think that's the direction we're on.
17:25 So we will be at a point a few years from now,
17:28 I think, where probably every content type
17:31 that we have now will be able to be created
17:35 in an AI-assisted way.
17:37 I still think all the creativity
17:38 will be driven by humans,
17:40 but I think we will see humans using AI systems
17:43 to accelerate that production piece
17:46 and remove some of the toil out of it,
17:48 probably out of every step of the media creation
17:51 and publishing process.
17:52 - Either of you want to add something?
17:55 No?
17:55 - Yeah, I can add quickly.
17:56 I think that we've seen general trends
17:59 of moving from old school media,
18:02 like newspapers, to film and television,
18:04 to games, which are now,
18:06 you know, by revenue at least,
18:08 a larger pie than all those other ones combined.
18:11 I think there's a movement towards interactivity
18:13 as a kind of core part of media,
18:15 and being able to, I think,
18:17 also include some degree of personalization.
18:19 I don't necessarily believe that everybody
18:20 will have their own TV show that they watch,
18:22 because I think one of the reasons
18:23 that people consume media is to have something shared
18:25 among different groups,
18:27 and to actually have a cultural
18:28 grounding point.
18:30 But I do think that how that media adapts
18:32 to each person is going to shape
18:34 the future of media.
18:35 So you'll probably have something like shared universes,
18:37 like we already have, you know,
18:38 Marvel, DC, Lord of the Rings, Harry Potter,
18:40 which make up most of the IP that people consume,
18:42 which will sort of ground it still,
18:44 and will still be owned by IP holders.
18:45 But then how that media is, I think,
18:47 transformed and personalized for each person,
18:50 and the way that they interact with it
18:51 will be something that changes the future of media.
18:53 So you'll be creating worlds,
18:54 but what the media actually is
18:56 will be up to each consumer and audience.
18:58 - I think an interesting question there
18:59 is what percentage of media.
19:01 Today, that is already true for games, right?
19:03 There's plenty of fantastic media out there
19:04 that is adaptive, and it will get even more so.
19:07 The question is three years from now,
19:09 what percentage of media that people consume
19:11 will be completely adaptive versus very narrative,
19:14 very driven by the creator?
19:17 And then, you know, I think the bigger discussion
19:19 is what percentage do we want it to be?
19:20 You know, different people have different perspectives,
19:22 I think, on what's the right blend between those.
19:24 - We have to leave it there.
19:25 Anastasis, Kylan, and Ely, thank you so much for your time.
19:28 - Thank you. - Thank you.
19:29 (audience applauding)
19:31 (upbeat music)
