Fortune Global Forum 2023: Beyond The Horizon

Peng Xiao, Chief Executive Officer, G42 In conversation with: Matt Heimer, FORTUNE
Transcript
00:00 So there's a lot of buzz here in Abu Dhabi and around the world right now
00:03 about what's being called the world's largest artificial intelligence supercomputer.
00:08 The system was launched earlier this year by the Abu Dhabi-born tech company G42,
00:14 and it is said to offer a new approach to AI computing that's going to significantly reduce
00:19 the time it takes to train an AI model. In addition to the supercomputer, G42 has also
00:25 unveiled JAIS, which is an open-source Arabic large language model, considered to be the
00:30 most advanced in the world. At the same time, events this week have shown how very geopolitically
00:36 sensitive this kind of advanced technological work can be. To discuss all this, we're very
00:41 pleased and fortunate to have G42's CEO, Peng Xiao, joining us today to talk about his company's
00:47 innovations, its mission to be a global leader in artificial intelligence, and the challenges
00:52 and responsibilities that come with that mission. So please join me in welcoming Peng to the Fortune
00:58 Global Forum.
00:58 Please have a seat. So Peng, for people in the audience who aren't already familiar,
01:12 tell us more about G42 and what it does and what you're hoping to accomplish.
01:16 Well, I've got the bag right here.
01:18 Oh yeah, and thank you for bringing it.
01:19 The secret of G42 is right inside this bag.
01:20 Present.
01:20 Yeah. Anyway, so yeah, tell us more about G42 and what your goals are and what you're
01:25 trying to accomplish.
01:26 Thank you, Matt, first and foremost, for this opportunity to be here with you.
01:30 My pleasure. Thank you.
01:31 As we discussed backstage, there are so many great and important things happening in the
01:36 field of AI. I cherish this opportunity to discuss them with you. G42 is a company that
01:42 started back in 2017, 2018 time frame. As our friend Sam Altman, who was here recently,
01:50 said, Abu Dhabi was on AI before AI was cool. So that was part of us supporting this brand
02:00 new technology becoming a global game changer. So we embraced AI as a core technology development
02:08 objective of our company and built our business around it since then. And the company has
02:13 developed from the original research mission on foundational AI work into many different
02:20 disciplines across different industries, from healthcare to the space program. And
02:24 here we are today.
02:25 What are some of the specific challenges within those categories that G42 is applying its
02:31 technology to?
02:33 There are many specific industry-specific challenges, but there are some fundamental
02:39 challenges to AI development in general. Specifically, I'll say three things. Number
02:47 one, you need great talent. You need great scientists. And that's really the brain of
02:47 one, you need a great talent. You need great scientists. And that's really the brain of
02:52 creating AI. And second, you need a lot of food to feed into the growth of AI. And the
02:58 food is, of course, the data. And the third element is you need also a tremendous amount
03:04 of muscle to process all this science and data, and that is compute.
03:09 So we've been tackling all three areas of challenge for many years now, attracting talent
03:17 here and working with talent globally as partners, and working on safeguarding and also
03:25 utilizing large data sets, and finally, working on supercomputers. You mentioned this
03:30 earlier in your opening remarks, to bring more muscle to this effort.
03:36 Yeah, thanks for bringing up the supercomputer. I think generally when people talk about AI,
03:40 they talk about the work being done, the models being trained and applied in the cloud in
03:46 a sort of distributed way. The supercomputer seems like an opposite, sort of more
03:50 concentrated, physically concentrated model. What are the reasons for going in that
03:55 direction? What advantages might it confer to what G42 is doing?
03:58 Great question. Thank you for this question. This is the reason why, by the way, we
04:03 undertook our own supercomputer initiative with our partner Cerebras in California. You're
04:08 right. Traditional hyperscalers use cloud infrastructure to help you to build your models.
04:13 It's a good way of developing model because they have so much compute, but it is also
04:20 challenging in the sense that we call it a system engineering effort because it is
04:24 spread across, distributed. You have to build your model in a smart way to take advantage of
04:29 this vast array of resources. In comparison, if I can show you here.
04:35 This is time for show and tell, and it's good.
04:37 I love this. It's very tangible. This is a single chip, the size of a small pizza,
04:44 that our partner Cerebras in California developed. On this one chip, we have 2.6 trillion
04:52 transistors. You can take a look. Sure. The advantage of this, of having it all in
04:56 one place, is does this speed up the training time for the AI model?
05:00 100%. Literally 100%?
05:02 Well, not literally. Sorry. Actually, it can be more than 100%, much more.
05:07 Basically, what happens here is that in a traditional environment of many smaller chips
05:15 across different systems, there's a huge cost of moving data on off the chip.
05:20 That is where things get slowed down. That's a bottleneck.
05:24 If you can load all the data you want to train on a single chip without having to,
05:30 on and off ramp all the time, you save a tremendous amount of time.
05:35 That's why we believe in this supercomputer training model.
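For readers who want to see the bottleneck Peng describes made concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is a hypothetical placeholder, not a G42 or Cerebras figure; it only illustrates how on/off-chip data movement can come to dominate a training step when work is spread across many smaller chips.

```python
# Rough model of one training step: compute time plus data-movement time.
# All figures below are hypothetical placeholders, for illustration only.

def step_time(flops_needed, peak_flops, bytes_moved, link_bandwidth):
    """Return seconds per step, ignoring any compute/communication overlap."""
    compute_s = flops_needed / peak_flops
    movement_s = bytes_moved / link_bandwidth
    return compute_s + movement_s

FLOPS_PER_STEP = 2e15   # work per training step (assumed)
PEAK_FLOPS     = 1e15   # sustained throughput of either system (assumed equal)

# Many small chips: activations and gradients cross chip boundaries every step.
distributed = step_time(FLOPS_PER_STEP, PEAK_FLOPS,
                        bytes_moved=4e11, link_bandwidth=1e11)

# One wafer-scale chip: the working set stays on-die, so off-chip traffic shrinks.
single_chip = step_time(FLOPS_PER_STEP, PEAK_FLOPS,
                        bytes_moved=1e10, link_bandwidth=1e11)

print(f"distributed: {distributed:.1f} s/step, single chip: {single_chip:.1f} s/step")
# -> distributed: 6.0 s/step, single chip: 2.1 s/step
```

With identical compute throughput, the only difference in the sketch is how many bytes must move on and off chip each step, which is the "on and off ramp" cost Peng refers to.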
05:39 The one we launched, you referred to just now, called Condor Galaxy.
05:44 Condor Galaxy. Yes.
05:45 Phase 1 of Condor Galaxy was launched in California back in, I think, July.
05:50 It was already 4 exaflops of capacity. 4 exaflops is, in layman's terms?
05:56 One single exaflop is 10 to the power of 18 computational cycles per second.
06:05 It was very fast. It's a very large number.
06:08 We had 4 of those. We just recently announced at a high-performance
06:14 computing show in the US that we're doubling that now to 8 exaflops.
06:19 The whole idea here is to speed up the training of large models with potentially trillions of
06:26 parameters, not just billions. This is why we pursue this architecture.
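As a worked restatement of the figures quoted above, with "exa" denoting 10 to the power of 18:

```latex
\[
1~\text{exaFLOP/s} = 10^{18}~\text{FLOP/s}, \qquad
4~\text{exaFLOP/s} = 4 \times 10^{18}~\text{FLOP/s}, \qquad
2 \times 4~\text{exaFLOP/s} = 8 \times 10^{18}~\text{FLOP/s}.
\]
```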
06:31 Cerebras is an important partner in making your hardware.
06:34 Your company has forged a lot of relationships globally.
06:37 What role do those partnerships and collaborations play in your strategy as
06:42 you're assembling data, as you're figuring out problems to attack?
06:46 What are some of the key partners that you're working with?
06:49 Yes, Cerebras for sure is one of them on the hardware side.
06:52 By the way, they also co-developed models with us too.
06:54 I heard you mention the JAIS model, the largest and most comprehensive,
06:58 most accurate Arabic model in the world was actually built in California
07:02 between our researchers in Abu Dhabi and researchers in California with Cerebras.
07:05 In a partner, we look for two things. Number one,
07:11 they bring to the table capabilities superior to what we have.
07:15 Second, they have to share our value system. We have to have the common belief.
07:20 For each one of the partners we have, it doesn't matter if it's Cerebras
07:24 or with our friends at OpenAI, or with global giants like Microsoft, we found both.
07:32 We believe there is an urgency in developing not just AI,
07:38 generative AI, all the way to, in fact, general AI, AGI.
07:43 We share that mission, we share that urgency.
07:46 And second, they bring to the table capabilities we actually don't have here.
07:51 And even if we want to develop, it'll take too long.
07:54 So, for example, with Microsoft, it's scale.
07:58 They have scale. AI is all about scale.
08:00 You cannot develop AI without that large, scalable platform.
08:05 So we choose Microsoft as a key partner in that area.
08:08 OpenAI, of course, is a key research and development partner.
08:13 We actually recently announced our partnership with them.
08:16 We are joining our roadmaps together, from our research ecosystem here in Abu Dhabi,
08:21 including MBZUAI, the Mohamed bin Zayed University of Artificial Intelligence,
08:24 together with Sam's team, to ensure we don't duplicate or conflict.
08:28 We go down the same path together, hopefully moving faster.
08:32 And again, in the case of Cerebras, without the hardware, we're going to go nowhere.
08:38 - Sure, sure. - That's the value they bring to the table.
08:41 - Your partnerships, I'm going to kind of address the elephant in the room now.
08:45 Your partnerships this week have attracted some negative publicity.
08:49 Earlier this week, the New York Times reported that U.S. intelligence agencies,
08:53 including the CIA, had been raising concerns about G42's ties to Chinese companies.
08:58 Summarizing the report, the story essentially said U.S. officials feared that G42 could be
09:04 a conduit by which advanced American technology is siphoned to Chinese companies.
09:09 At the same time, the same article said, and again, I'm quoting,
09:14 "Companies in almost every sector around the globe have ties to China."
09:19 So you do have some Chinese relationships and Chinese partnerships.
09:22 They're attracting some concern.
09:23 There's also this geopolitical tension between two major nations that many people in this room,
09:29 many business leaders have to navigate.
09:31 How are you navigating it?
09:33 How are you dealing with this new attention?
09:36 How are you confronting the reality of this difficult environment?
09:41 - Thank you for the question. - Thank you for the answer.
09:45 - First, let me acknowledge my own ignorance because I didn't actually finish reading that article.
09:50 - Okay.
09:51 - Someone sent it to me and something more important happened.
09:53 I just didn't finish reading it.
09:55 But I did get the gist of what The New York Times was trying to say.
10:00 As you mentioned already, Matt, this is a very strategic global topic, AI.
10:07 And it has far-reaching consequences, from our society here in Abu Dhabi and the UAE,
10:15 to the geopolitical situation around us, all the way to humanity as a whole.
10:20 So it's correct that we all need to pay attention to the proper development of AI
10:28 and ultimately what we use AI for.
10:30 To that end, we have been a leading practitioner in this area.
10:36 As I mentioned, you saw our list of top partners we're working with.
10:42 The safety of AI has been always a top priority for us.
10:45 We spoke earlier backstage about the importance of safeguarding your data,
10:50 the importance of proper data governance.
10:52 I think that domain has been more or less addressed pretty well
10:59 from the previous work we did in big data, data analytics,
11:04 from the days when Google launched its first search engine.
11:07 - So when you say it's been addressed pretty well,
11:09 are you saying that there are frameworks in place?
11:11 - Yes.
11:11 - That require you to safeguard data?
11:14 - Data classification, data privacy, data residency,
11:18 all of this, I think, in the age of hyperscalers,
11:20 is, I believe, fairly well addressed.
11:24 - And when you say well addressed, do you mean there's a wall built
11:27 that is firm enough to keep data from being in the wrong hands?
11:30 - There are good policies and procedures in place.
11:33 It's up to the practitioners to implement them in the right way.
11:36 If they are implemented properly, as in our case, there's very little risk
11:41 of data privacy concerns.
11:44 However, I think you're referring to, Matt, probably going beyond data,
11:48 the safety and security of intelligence.
11:51 When data is actually put into a model, when the model has been developed,
11:57 when the model begins to generate intelligence,
11:59 this is where we are in a brand new territory.
12:03 And I fully agree, because it's brand new, because it's so nascent,
12:08 we have to, as practitioners, work together hand in hand to safeguard, number one,
12:13 the proper generation of intelligence.
12:15 How do we know it's the right form of intelligence?
12:18 How do we know it's really safe, giving us the right answers?
12:20 And number two, how the intelligence, becoming more and more powerful,
12:25 will not fall into the wrong hands to be used for wrong purposes.
12:28 - Does this mean you're not only responsible for safeguarding the data
12:32 in terms of where it is held, but you also have some responsibility
12:37 over how your partners, or I should say, maybe over how your customers use it.
12:41 And can you control that?
12:42 - Absolutely.
12:43 I think I'm acknowledging absolutely that's the right question.
12:48 We need to address it.
12:49 With our partners, Matt, for sure, we have mutual agreement on how data is handled.
12:55 For example, when we ship our Arabic data all the way to California
12:59 to be trained on the supercomputer,
13:02 we make sure there's no leakage of sensitive information
13:05 from our data set in the US.
13:08 The same thing, if the US counterparts ever share any data with us,
13:14 we are accountable to ensuring that that data stays safe and sound here.
13:18 And there's auditing procedures,
13:20 there's remote monitoring procedures in place for that.
13:23 Now, the challenge is the moment when data or models get into your clients,
13:28 your customer hands, where you don't have that type of control anymore.
13:32 Of course, your customers will sign certain agreements,
13:35 but ultimately how they use it,
13:36 you as a vendor cannot be fully accountable for it.
13:42 This is why we need an overall ecosystem framework
13:46 from government all the way to consumers.
13:48 - And for that, you need, I assume, collaboration.
13:51 - You need a lot of collaboration and transparency.
13:54 That's critical.
13:56 - There are two arenas in which G42 works, if I understand correctly,
14:00 where there's particular concern around how data is used.
14:03 One is in medical research and essentially drug development
14:08 where individually traceable genetic data could get out into the ecosystem.
14:12 The other is surveillance.
14:14 There's, I believe, your company Presight AI,
14:17 which does security and surveillance technology that police forces use.
14:22 How do you make sure the same ethical standards apply in those even more sensitive realms
14:28 and what additional burdens come with that?
14:32 And are you fairly confident, again, that the right safeguards are in place?
14:36 - Yeah, you just asked, I think, two questions here.
14:39 I'll try to answer them both.
14:40 In the case of Presight, one of our companies in the G42 portfolio
14:44 focused on big data applications.
14:46 We actually provide to our clients the big data platform
14:52 and the capabilities on which our clients can develop their own applications.
14:56 If our client's big data application is commercial in nature,
15:01 it's non-sensitive, we'll build it for them
15:03 following their guidance and their requirements.
15:05 But in cases where the clients are actually building sensitive applications,
15:09 containing extremely sensitive data sets and use cases,
15:12 we train them how to use our tools to build their own solutions,
15:16 which is, in some cases, what Presight is doing with some of our government clients here.
15:22 In the case of genomic data, you know there is a Supreme Council
15:27 of Genomics Data Management here in the country.
15:30 So there are multiple layers from regulation down to IT policy
15:36 to safeguard genomics and other medical data sets in the country.
15:40 In fact, I can tell you right now,
15:42 I personally have never been able to get my hands on any of the data sets
15:46 in the WGIS database we have in the country.
15:49 And we don't need to.
15:51 That's not our job to do so.
15:53 The data belong to the individuals.
15:55 Our job ultimately is to help our nation,
15:58 help our partners to bring insight out of all of that data,
16:02 then to go back and provide, hopefully, precision medicine support to the population.
16:08 To do that, you actually don't have to disclose
16:12 or even, in many cases, access some of the sensitive sets.
16:16 You referred to, if I heard you correctly, helping our nation.
16:20 You're a national now, a citizen of the UAE.
16:23 Yes, I am.
16:24 What drove that decision?
16:25 Well, the work I was doing in the UAE is very important to the national agenda.
16:33 The national agenda being to decarbonize, to diversify the economy.
16:37 Absolutely. Absolutely.
16:38 When I came here almost nine years ago,
16:40 I was answering a call to action from this nation.
16:45 That is, they want to go from an extractive economy relying on oil
16:51 to a creative knowledge economy.
16:55 That's why I came to help to build this big data platform,
16:58 which eventually became this AI group of ours.
17:02 So it became clear to me, this call to action became my calling.
17:07 This became home.
17:08 And because I'm doing such important, cutting-edge work in this country,
17:14 I was very happy to take up UAE citizenship.
17:17 And the regulation here is such that you can only have one citizenship.
17:21 I see.
17:22 So here I am.
17:23 Yes.
17:24 Well, you answered their call.
17:26 We're really grateful that you answered ours.
17:27 Thank you for coming and talking to the Global Forum audience.
17:30 Of course.
17:31 Good luck with the work you're doing.
17:32 Thank you so much.
17:32 Thank you.
17:34 Thank you.
