Billionaire investors of the internet era are now locked in a war of words and influence to determine whether AI's future will be one of concentrated safety or unfettered advancement. The stakes couldn't be higher.

In May 2023, OpenAI CEO Sam Altman appeared on Capitol Hill for a Senate subcommittee meeting on AI. The substance of his message: Regulate us. For his opponents, this was the mask-off moment they'd been waiting for. Three months earlier, Musk, who had cofounded and bankrolled OpenAI when it was still an open-source nonprofit, had taken to X to decry OpenAI's recent multibillion-dollar capital infusion from Microsoft. From its nonprofit roots, OpenAI had evolved into a "closed-source, maximum-profit company effectively controlled by Microsoft," Musk said.

For Vinod Khosla and Reid Hoffman—who met with Altman together at least once to talk strategy but otherwise move in separate circles—OpenAI's willingness to compromise is how to get things done. Whether Hoffman is talking to Biden, Pope Francis or U.S. Commerce Secretary Gina Raimondo, a frequent collaborator in recent months, their questions are similar: How will constituents' lives change because of AI? What about their jobs? When should they be excited for benefits, or cautious about risks? "You have to show you understand what their primary game is, and that they can trust you to figure it out," Hoffman says. "If your approach to government is to say 'get out of my way,' then you're not helping with their game."

Read the full story on Forbes: https://www.forbes.com/sites/alexkonrad/2024/06/04/inside-silicon-valley-influence-battle-for-ai-future/?sh=44d2570c2dc4

0:00 Introduction
0:59 Where Is Silicon Valley On AI
5:03 Reid Hoffman And The State Of OpenAI
7:44 Where Does Social Media And AI Regulation Stand?
9:55 Vinod Khosla's Take On OpenAI And The State Of The Industry
13:30 Will Regulations Protect Our Society From AI Going Too Far?

Subscribe to FORBES: https://www.youtube.com/user/Forbes?sub_confirmation=1

Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:

https://account.forbes.com/membership/?utm_source=youtube&utm_medium=display&utm_campaign=growth_non-sub_paid_subscribe_ytdescript

Stay Connected
Forbes newsletters: https://newsletters.editorial.forbes.com
Forbes on Facebook: http://fb.com/forbes
Forbes Video on Twitter: http://www.twitter.com/forbes
Forbes Video on Instagram: http://instagram.com/forbes
More From Forbes: http://forbes.com

Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.

Transcript
00:00Billionaire investors of the internet era are now locked in a war of words and influence
00:05to determine whether AI's future will be one of concentrated safety or unfettered advancement.
00:10The stakes couldn't be higher.
00:17This debate is about how those tools will be applied moving forward in society.
00:23You have some of the most powerful people in the world, literal world leaders, the Pope
00:27of all people, paying attention to these folks coming out of Silicon Valley who have very
00:32different views.
00:33And so while these conversations are sometimes happening behind closed doors and not making
00:37it to the general public, I think it's very important for our audience to know that this
00:41debate is happening and that the direction of AI development is being fiercely fought
00:47ideologically right now.
00:49I see AI as a really, really powerful tool that will enable a set of things that otherwise
00:56we wouldn't think of it as plausible.
00:59Vinod Khosla is a legend in Silicon Valley and the wider tech community.
01:03He co-founded one of the most important computing companies of the 1980s into the 90s called
01:08Sun Microsystems.
01:09Since then, he's been an elder statesman in the tech community, and he was the first institutional
01:14investor in what was then a small AI company known as OpenAI.
01:19He sees AI as a great economic equalizer that can sort of raise up the developing world
01:24while also improving the health and daily lives of people in places like the U.S. as
01:29well.
01:30We caught up with Vinod Khosla at Khosla Ventures office in Menlo Park to hear more.
01:34It's possible now to build an AI doctor that would be a free doctor, like a primary care
01:40physician for everybody.
01:42The same with teachers.
01:44We could have a free AI tutor for every child on the planet for the cost of computing, which
01:51as you know, is very cheap and declining.
01:54That's true of generally most expertise over the next 10, 20 years will be essentially
01:59free, which is a fundamental restructuring of how we value expertise in society.
02:05Now whether as a society we allow this to happen, that's up to each country by itself.
02:11I don't think that should be a concern, but it is a concern that people have when it comes
02:16to AI because it's so powerful.
02:19How do we as human beings play this game to win, right?
02:23And by winning it means we make our societies better.
02:26We take the things that we value as part of the reason we value middle class society like
02:30education and wisdom and compassion and how do we become better in all these characteristics.
02:35Vinod Khosla and Reid Hoffman are both key early investors in OpenAI in their own way
02:40and they're both influential with sort of different spheres of the AI landscape.
02:44They agree on a lot of subjects, but not all.
02:50Reid Hoffman is an early member of the PayPal mafia, the legendary group of Silicon Valley
02:54tech leaders including folks like Elon Musk, Peter Thiel and Max Levchin who were members
03:00of PayPal when it was a startup in the early days of the dot-com era.
03:04And then he made a name for himself as the co-founder of LinkedIn, the well-known professional
03:09network that was acquired by Microsoft for over $20 billion.
03:13Reid has since been a key investor at a venture capital firm called Greylock where he actually
03:18brought in OpenAI and at that time it was a non-profit with no business plan.
03:23So the firm actually didn't invest, but Reid invested early on through his foundation and
03:28he's been involved with a number of other important AI companies.
03:32He also has stood out to us on a radar because he has done podcasts and books where he's
03:37used AI tools himself.
03:40He wrote a book using ChatGPT and he has met with everyone from President Joe Biden to
03:46Commerce Secretary Gina Raimondo to Rishi Sunak in the UK.
03:51He's had multiple breakfasts with French President Emmanuel Macron.
03:55If one could say I'm spending 110% of my time on AI, that would be somewhat accurate.
04:01Just because the question around what does it mean for us, not just as investors, you
04:06know, because we have this enormous focus on this at Greylock, but also from society
04:11impact.
04:12What does it mean for the future work?
04:13What does it mean for how individuals navigate their lives?
04:17And it's something that I started a number of years ago when I had the fortune to be
04:20part of the founding team for OpenAI for how that was going to pull together.
04:26And it was the, okay, AI, which has been described to be here a number of times, is now really
04:33here.
04:34And so what does it mean for us as human beings?
04:37Now that humanist hat might surprise people as it really matters as an investor.
04:42And so it's the entire range.
04:44And so it goes everything from, you know, co-founding a company, the first since LinkedIn,
04:49to, you know, helping academic institutions or nonprofits, in addition of obviously governments,
04:57navigate this new world of how AI is transforming us.
05:03Reid Hoffman calls himself an optimist.
05:05He actually says that he was a techno-optimist before Marc Andreessen was.
05:10And Reid believes that overall AI is going to have the same exciting impacts for society
05:15that someone like Vinod Khosla would.
05:18He thinks that it could lead to great breakthroughs for medicine, for education, for business
05:24as well.
05:25But Reid compares AI to an automobile that you are driving from Los Angeles to San Francisco.
05:31It's faster than before.
05:33It's cooler than before.
05:34You want to see what it can do.
05:35But at the same time, he believes in perhaps slowing down at corners.
05:39And by that, he means cutting-edge technology like these models from OpenAI that he thinks,
05:45you know, at a certain level of proficiency could have security impact.
05:49For example, creating a bioweapon.
05:52You know, Hoffman says for a lot of things, if they're relatively harmless use cases,
05:57you can put the genie back in the bottle.
05:59But if a model somehow allowed you to become a bioweapons expert and 100 million people
06:04got access to it, it would be very hard to put that genie back in the bottle.
06:10On the flip side, there's Marc Andreessen, the co-founder of Netscape and Andreessen
06:14Horowitz, an influential venture capital firm.
06:17Marc believes that open source is the way to go.
06:20Andreessen believes that the market itself can sort of dictate what is safe, what is
06:25allowable here.
06:26You don't need new regulation and new rules, especially at the level of these models themselves
06:32to keep people safe, keep progress moving.
06:35In fact, Marc even wrote in an essay that anyone who tries to slow down AI development
06:40is basically murdering people because it is slowing down the life-saving potential of AI.
06:46It's important to note that almost everyone we talked to about AI, including all these
06:50Midas List investors, consider themselves optimists in some way about AI.
06:55Where they start to disagree is about how powerful these tools are, how dangerous they
06:59could be, and what kind of regulation or guardrails should be placed on them.
07:07In reporting this story, we went down to Washington, D.C., to a conference called the Hill and
07:11Valley Forum, where we saw Khosla and a bunch of other leaders from AI speak with lawmakers.
07:17What's so interesting about this is that you actually have lawmakers on both sides of the
07:20aisle, Republican and Democrat, trying to learn a lot about AI and decide what U.S.
07:25policy should be along these lines.
07:28Perhaps the biggest nightmare is the looming new industrial revolution, the displacement
07:34of millions of workers.
07:36Interesting.
07:37Should we have a license required for these tools?
07:39We're going to have all kinds of misinformation.
07:42What kind of an innovation is it going to be?
07:44Are you considering protections for content generators and creators?
07:50So there's clearly implications of AI in national security, whether it's surveillance through
07:57TikTok or through Huawei telecommunications equipment or just cyber warfare.
08:05Also things like drones weaponized by AI.
08:08So there's clearly a very large risk and race there.
08:12Even larger than that is the social impact of a race.
08:19If China wins the race, then they will get to convey economic benefits to many other
08:26parts of the world.
08:28That's why I think this race with China is so important and we should be acknowledging
08:33it's a race and not trusting the process and not relying on things like treaties, etc.
08:40If you're in a probability increasing of, I'll use the shorthand, a Terminator universe,
08:46you want to know that that's a probability increasing and you want to be doing things
08:50to decrease that probability.
08:52The thing of where it just accidentally happens, a la the Terminator movie, is I think highly
08:59implausible.
09:00The mistake I think of this dialogue is that it's the robots are coming rather than the
09:05humans are coming with the robots.
09:08Just as AI is a human amplifier of good people, it's also an amplifier of bad people.
09:14So you have to have that navigation for how do we help as many good people, doctors, other
09:19things, and as few bad people, terrorists, rogue states, criminals.
09:25That's part of the reason why engaging in the conversations with the AI doomers is useful
09:32because you're finding what would be the thing we'd be looking for if your science fiction
09:38example suddenly increased from low probability science fiction to something growing.
09:45And then how would we navigate that?
09:46Because again, I think we could navigate it.
09:48I'm sympathetic with the concern and not sympathetic with the process and the tools.
09:55So Vinod believes that AI should be developed largely ad hoc and freely, except
10:00with certain advanced models that should be regulated by the government.
10:04And he says that at this sort of cutting edge, what he calls frontier models, AI should be
10:08regulated and perhaps controlled so that you don't have these models getting shared with
10:13potential enemies of the U.S. such as China, North Korea, Iran.
10:17Reid Hoffman believes that these models should be protected and regulated so that you don't
10:23accidentally create the next COVID-19 or some sort of bioweapon that would affect millions
10:29of people.
10:30He sees these models as just becoming so fricking powerful that we can't keep up with controlling
10:36them after we've released them.
10:38To hear the other point of view, we caught up with Bill Gurley, a former Midas lister
10:41who now lives in Texas.
10:45It's remarkably interesting that the biggest doomers are the ones that have the biggest
10:50financial stakes and the biggest foundational model.
10:53Like if you were coming to me and saying, here's a panel of 15 academics who've got
10:57nothing in this, who all think this is what should happen, then it'd be a different conversation.
11:03But you have people, this is, you know, Vinod will tell you he's the earliest, largest investor
11:08in OpenAI.
11:09Once again, that's a pretty ironic name, as Elon has pointed out.
11:16But he's also, Vinod also plays this card, which I really dislike, which is this vilification
11:23of China.
11:24The entrepreneurs in China are remarkable and very talented.
11:29And it's not as rocket science as you want to believe.
11:32There's already enough information out there for them to have their own open source models.
11:36And so if you pull up and say there's no open source here and you leave open source
11:42there, guess where it's going to be the most innovative, where you're going to have the
11:45most entrepreneurs, where you're going to have the most innovation, it's going to be
11:48over there.
11:49And we're going to have created this muddy, you know, wasteland where entrepreneurs can't
11:54move around.
11:55The thing I would ask these people is why do they have confidence that Washington would
12:01be good at regulating something this nascent?
12:03Like why would they be effective?
12:05The only rules that they want are the ones that they wrote.
12:10These models have been out there for a couple of years, and we haven't had massive, you
12:15know, crimes.
12:16I mean, there's incidents, but there's incidents with every new technology that's come along.
12:21Crimes are still crimes.
12:22If I defraud you with AI, it's still illegal as it would have been if I had done it with
12:26a phishing scam before AI.
12:29And so why do we need new laws?
12:31Why do we need new agencies?
12:34Let's just use the laws that are on the book.
12:38One thing that I really picked up on reporting the story was that folks like Bill Gurley
12:42and Martin Casado felt a huge sense of urgency.
12:46This is the time to be talking about this stuff, according to them, which is kind of
12:49interesting because a lot of people agree that these AI tools are super early.
12:54You know, we are seeing right now that Google's AI tools are, you know, telling people to
12:58put glue on pizza.
13:01You know, we're seeing OpenAI have all sorts of issues with its tools as well.
13:05You know, this is an early technology with a lot of bumps.
13:08And outside the tech world, plenty of people think that these AI tools aren't that useful
13:13at all.
13:14So if it's such early days, why is it so important that we'd be having this debate right now?
13:19And a lot of the people we spoke to basically feel like this is the moment that we kind
13:23of dictate the conversation, that the tone that will affect the next few years is being
13:29set right now.
13:31We are going to have to have policies here.
13:33Some of the policies may just be kind of clarifications or extensions of current policies.
13:38Some of the policies will be new policies we're creating as revolutionary technology,
13:42not surprisingly, that will require some new thinking, some new balances of what balances
13:47between, you know, kind of citizens' interests and consumers' interests and, you know, kind
13:53of industry interests and government interests and how do you bring all those together.
13:57And that's part of what policy does, is to say, let's make sure that these are shaped
14:02in good ways.
14:03But I think it's very early, and anybody who thinks they know the right policy shape right
14:09now is either deluding themselves or deluding you.
14:14And so we have to learn it together, and that's the really important part.
14:17Regulation is the friend of the incumbent.
14:20It's not time for the government to start picking winners.
14:22And one thing that your readers should be aware of is even Reid and Vinod are spending
14:29a ton of time in Washington, and they've put money in super PACs.
14:34I've spent 25 years in this business and not had to go kowtow to Washington to go placate,
14:45you know, different senators and congressmen.
14:49I think it'd just be a horrible step for the venture industry for this to be the first
14:53move when any new wave comes along is for people to run and become friends with the
15:00government and try and lock in their winners.
15:02I think that's just a horrible precedent.
15:04There'll be some refinement of policy and implementation of these laws, maybe some new
15:11legislation over time.
15:15But currently, I think they are sufficient.
15:18That doesn't say we won't have litigation.
15:20We will have litigation because people have different points of view.
15:24But I do think we don't fundamentally need to restructure what needs to be done.
15:30There's a very optimistic future that's possible.
15:34I have a dozen forecasts about plausible tomorrows, and I'm pretty optimistic.
