Will artificial intelligence save us or kill us all? In Japan, AI-driven technology promises better lives for an aging population. But researchers in Silicon Valley are warning of untamable forces being unleashed – and even human extinction.
00:00There is a significant risk of human extinction from advanced AI systems.
00:15Japan now faces very severe aging problems.
00:18I would like to solve these problems.
00:21We don't currently know how to steer these systems, how to make sure that they robustly
00:25understand in the first place, or even follow human values once they do understand them.
00:31I love the technologies.
00:33I would like to create such technologies to contribute to human society.
00:40Even the brain and nervous system can be connected to cyberspace.
00:45This is a rapidly evolving technology.
00:47People do not know how it currently works, much less how future systems will work.
00:50We don't really have ways yet to make sure that what's being developed is going to be
00:54safe.
00:55This AI recognizes human beings as one of the important living things.
01:03This very small group of people are developing really powerful technologies we know very
01:07little about.
01:08People's concerns about generative AI wiping out humanity stem from a fear that if left
01:14unchecked, AI could potentially develop advanced capabilities and make decisions that are harmful
01:19to humans.
01:21As the world grapples with the implications of this rapidly evolving field, one thing
01:25is certain.
01:26The impact of AI on humanity will be profound.
01:51With new AI technologies, you can realize a fusion between the human side and technology
02:20side.
02:22This one is the world's first wearable cyborg.
02:29Cyberdyne is trying to create very innovative Cybernics technologies, especially focusing
02:34on the medical and healthcare fields, for humans and human society.
02:47My name is Yoshiyuki Sankai.
02:50I'm a professor of the University of Tsukuba, Japan, and also the CEO of Cyberdyne.
02:56Let's create a bright future for humans and human society with such AI systems.
03:15I personally want to have an impact on making the world better, and working on AI safety
03:20certainly seems like one of the best ways to do that right now.
03:24Many public intellectuals, professors, and scientists across industry and academia recognize
03:30that there is a significant risk of human extinction from advanced AI systems.
03:38We've seen in recent years rapid advancements in making AI systems more powerful, bigger,
03:43more generally competent and able to do complex reasoning, and yet we don't have comparable
03:48progress in safety guardrails, or monitoring, or evaluations, or ways to know that these
03:53powerful systems are going to be safe.
03:56My name is Gabriel Mukobi, or Gabe, you can call me.
03:59I'm a grad student at Stanford, and I do AI safety research and I lead Stanford AI alignment.
04:08This is our student group and research community focused on mitigating the risks of advanced
04:13AI systems.
04:14People like mitigating weapons of mass destruction, you know?
04:17It's a good thing.
04:19Everyone gets behind that.
04:20These more catastrophic risks unfortunately do seem pretty likely.
04:23Many leading scientists tend to put single-digit, or sometimes double-digit, chances on
04:29existential risk from advanced AI.
04:32Other possible worst cases could include not extinction events, but other very bad things,
04:36like locking in totalitarian states, or disempowering many people, or concentrating power so that
04:44many people do not get a say in how AI will shape and potentially transform our society.
04:53AI has become such a divisive topic.
04:55There are a lot of valid concerns.
04:57Some believe it could lead to job losses, increased inequality, and even unethical uses
05:02of AI.
05:03However, AI also has tremendous potential to benefit humanity.
05:08It could help us tackle some of the world's biggest problems such as climate change, disease,
05:14and poverty.
06:06In Europe and the United States, it is used for patients with strokes and spinal cord injuries.
06:19HAL detects the very important intention signals that go from the human brain to the periphery.
06:25If the human wishes to move, then the brain generates the intention signals.
06:30These intention signals are transmitted through the spinal cord and motor nerves to the muscles,
06:35and then finally we can move.
06:39HAL systems and humans always work together.
06:43Twenty countries now use these devices as medical devices.
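To make the signal pathway Sankai describes concrete, here is a minimal sketch of an intention-driven assist loop. This is not Cyberdyne's actual software: `read_sensor()`, the `Motor` class, the threshold, and the gain are all hypothetical stand-ins for real hardware interfaces, but the loop shows the idea of detecting a bioelectric intention signal at the skin and assisting movement in proportion to it.

```python
# Minimal sketch (hypothetical hardware API) of an intention-signal assist loop.
import time

THRESHOLD = 0.5  # assumed signal level that counts as "the wearer intends to move"
GAIN = 2.0       # assumed assist torque per unit of signal above threshold

class Motor:
    """Hypothetical actuator on an assistive joint."""
    def apply_torque(self, torque: float) -> None:
        print(f"assist torque: {torque:.2f}")

def read_sensor() -> float:
    """Hypothetical: filtered bioelectric signal amplitude from a skin electrode."""
    return 0.7  # placeholder value for illustration

def control_loop(motor: Motor, steps: int = 100) -> None:
    for _ in range(steps):
        signal = read_sensor()
        if signal > THRESHOLD:
            # The wearer intends to move: assist in proportion to the signal,
            # so the device and the human work together.
            motor.apply_torque(GAIN * (signal - THRESHOLD))
        else:
            motor.apply_torque(0.0)
        time.sleep(0.01)  # roughly 100 Hz control loop

control_loop(Motor(), steps=3)
```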
06:49I think there's definitely great ways AI technology is used in medicine.
06:54For example, there's cancer detection that's possible because of image recognition systems
06:59using AI.
07:00That allows for detection without invasive tests, which is really fantastic, and early
07:05detection as well.
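As an illustration of the kind of image-recognition system mentioned here, the sketch below runs a pretrained convolutional network on a scan image and reports a probability. It is not any specific medical product: the two-class head is untrained, and `scan.png` is a hypothetical input file; a real diagnostic model would be trained and clinically validated on medical data.

```python
# Illustrative only: a generic image-classification pipeline, not a medical device.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, 2)  # hypothetical benign/suspicious head
model.eval()

image = Image.open("scan.png").convert("RGB")  # hypothetical scan image
batch = preprocess(image).unsqueeze(0)         # shape: [1, 3, 224, 224]

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)

print(f"P(suspicious finding) = {probs[0, 1]:.3f}")  # untrained head: numbers meaningless here
```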
07:07No technology, including AI, is inherently good or bad.
07:12Of course, we should be thinking about long-term impact, in terms of the direction in which
07:16we're taking the technology.
07:18But at the same time, we also need to think about how we're going to use AI in medicine
07:24less in a technical sense, and more in terms of its impact on the human body, on real-life humans today.
07:47Japan, I think, is quite optimistic about AI technology.
07:52There's a lot of hype at the moment.
07:54It's like a shiny new toy that everybody wants to play with.
07:57Whenever I go to the US or Australia or the EU countries, there's far more of a knee-jerk
08:03kind of fear or concern.
08:05I was quite surprised, to be honest.
08:12Meetings are every Wednesday.
08:15So, there's usually some guest we bring in, or some other SAIA researcher who presents.
08:19Then we have boba afterwards.
08:21That's awesome.
08:22Yeah, it's a good deal.
08:23Kind of like a research lab.
08:25I happen to have an HDMI-to-USB-C adapter.
08:28Something to plug in.
08:30Oh, you did plug in.
08:32Never mind.
08:33Sorry, I'm hallucinating.
08:34I'll pass it off to our speaker, Dan Hendrycks.
08:40The Wednesday meetings are really good for inviting new people, too.
08:43It's nice to meet some new students, talk about why you're interested in AI safety or not.
08:49So, if you're wanting to synthesize smallpox, or a chemical agent like mustard gas,
08:55you can do that.
08:56Access is already high, and it will just be increasing across time.
09:01But there's still an issue of needing skills.
09:05So, basically, you need something like a top PhD in virology to create a new pandemic that could take down civilization.
09:18There are some sequences online, which I won't disclose, that could kill millions of people.
09:23Actually, it's more dangerous than that.
09:25Yes?
09:26So, with the access thing, a lot of people bring up labs.
09:28And, oh, you maybe don't just need to be a top PhD.
09:30You also need some kind of biolab to do experiments.
09:32Is that still a thing?
09:34So, this would...
09:36It depends on how good the cookbook is, for instance.
09:43Excuse me.
09:44Certainly, there are people who come in with disagreements.
09:46They're like, oh, powerful AI is not coming for a long time, or it doesn't seem important to work on these things.
09:53We could just...
09:54Let's build an accelerator or whatever.
09:57There's a large potential, especially from people engineering pandemics, to cause a wide range of harm in the coming years.
10:03Now, there are other instances of catastrophic misuse that people are expecting, too.
10:07One is with cyber attacks.
10:09We might have AI systems in the coming years that are really good at programming, but also really good at exploiting zero-day vulnerabilities, exploiting software vulnerabilities in secure systems.
10:27Maybe the top use case of AI will be making money.
10:31You might see a lot of people being defrauded of money.
10:33You might see a lot of attacks on public infrastructure, threats against individuals in order to extort them.
10:40It could be a wild west of digital cyber attacks in the coming years.
10:45Beyond that, though, there is a pretty big risk that AI systems could actually get out of the control of their developers.
10:50We don't currently know how to steer these systems, how to make sure that they robustly understand in the first place, or even follow human values once they do understand them.
10:59They might learn to value things that are not exactly aligned with what we want as humans, like having Earth be suitable for life on it, or making it more sustainable.
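A toy example helps pin down what "valuing things that are not exactly aligned with what we want" means. The sketch below, invented for this transcript, compares an agent that maximizes a proxy reward (points collected) with one that also respects an unstated objective (don't trample flowers); judged only by the proxy, the misaligned agent looks strictly better, which is the basic shape of the misspecification problem.

```python
# Toy illustration of reward misspecification; environment and numbers are invented.
import random

random.seed(0)

def run_episode(respects_flowers: bool) -> tuple[int, int]:
    points, flowers_trampled = 0, 0
    for _ in range(100):
        # Each step, the fastest route to a point tramples a flower 30% of the time.
        would_trample = random.random() < 0.3
        if would_trample and respects_flowers:
            continue  # the aligned agent forgoes the point
        points += 1
        flowers_trampled += int(would_trample)
    return points, flowers_trampled

print("proxy-maximizing agent (points, flowers trampled):", run_episode(False))
print("aligned agent          (points, flowers trampled):", run_episode(True))
```

Optimizing the stated objective is not the same as doing what we want, and that gap is what alignment research tries to close.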
11:10I was fortunate to have a very supportive family.
11:13Especially a few years ago, AI safety was a lot less mainstream.
11:16So there was always some uncertainty of, hey, is this going to be actually something that's helpful in the first place?
11:21Are you going to have a stable job, things like this?
11:23But as time has gone on, we've seen a lot more AI systems being used by people.
11:31Unfortunately, for some of the worst-case risks, a lot of experts think there's a pretty significant chance of them.
11:35Many scientists put single- or double-digit chances on existential risk from advanced AI.
11:42And so, we're seeing a lot of people raising the alarm for AI safety and AI risk.
11:46It tends to be like every few days, my mom will send me something like, hey, have you seen this new thing?
12:00There's a recent interview where the US FTC chair said that she's an optimist, so she puts only a 15% chance on AI killing everyone.
12:10My vision is a little bit different.
12:13We could create the AI systems.
12:16This is a newly created species, I think.
12:23A generative AI system is different from simple programming systems.
12:28It has the ability to grow up.
12:36This AI recognizes human beings as one of the important living things, like one of the animals.
12:45And because humans are also living things, they recognize the importance of humans.
12:52They try to keep our societies, our cultures, and our circumstances.
12:59We human beings have some problems: aging problems, or disease and accidents.
13:05AI systems, or some technologies with AI systems, will support some functions.
13:22How are you?
13:23I'm fine.
13:24You're a little bit noisy.
13:26How is your grandchild doing?
13:31He's doing well.
13:32That's great.
13:34I'm so happy to hear that you're so healthy.
13:40Japan now faces very severe aging problems.
13:44The average age of workers in agricultural fields is now over 70 years old.
13:52Average.
13:53Wow.
13:54One, two, three, four.
13:59Let me fix your tire.
14:02I'll put it here.
14:06It's not going up.
14:07No, it's going up well.
14:09Be careful.
14:10Here we go.
14:11When my wife fell ill, I was determined to take care of her for the rest of my life.
14:20It took a while.
14:22Yes.
14:23It took too long.
14:25You shouldn't look down.
14:27I quit my job to take care of my wife.
14:42It's okay.
14:47At first, the doctor told me that I would never walk again.
14:56He also told me that I couldn't live without a caretaker.
15:07The director told me that an AI robot would come.
15:13Come here.
15:17I didn't think I'd be able to walk so fast.
15:20I was able to go back to my normal life in just two months.
15:26I'm running after the surgery.
15:31Yes.
15:33I would like to solve this aging society's problems.
15:38In my childhood, my mother bought me a microscope and some electrical parts.
15:44Every day, I spent a lot of time on such experiments and challenges.
15:52I love to read science fiction books,
15:57especially those written by Isaac Asimov.
16:05If you've heard about AI in the last couple years, chances are the technology you heard about was developed here.
16:13The breakthroughs behind it happened here.
16:15The money behind it came from here.
16:18The people behind it live here.
16:20It's really all been centered here in the Bay Area.
16:25A lot of the startups that are at the leading edge of AI, so that's OpenAI, Anthropic, Inflection, names you might not yet be familiar with.
16:41They are backed by some of the big companies you already know that are at the top of the stock market.
16:47That's Microsoft, Amazon, Meta, Google.
16:52These companies are based here, many of them in the Bay Area.
17:00For all of the discussion that we've seen about AI policy, there's actually very little that tech companies have to do.
17:09A lot of it is just voluntary.
17:11What we are really depending on as guardrails is the benevolence of the companies themselves.
17:19Gabe, I think, is an example of a lot of the young people who are coming to the movement now, who are not ideological, who are really interested in the technology, who are aware of its potential harms, and see this as the most important thing that they could do with their time, their opportunity to work on what many of them call the Manhattan Project.
17:48You have to realize that unlike some other very general technologies that have been developed in the past, AI is mostly being pushed, especially the frontier systems, by a small group of scientists in San Francisco.
18:03And this very small group of people are developing really powerful technologies we know very little about.
18:08Some of this maybe comes from a lot of historical techno-optimism, especially among the startup landscape in the Bay Area.
18:15A lot of people are kind of used to this move fast and break things paradigm that sometimes ends up making things go well.
18:22But as is the case if you're developing a technology that affects society, you don't want to move so fast that you actually break society.
18:39PauseAI wants a global and indefinite pause on the development of frontier artificial general intelligence.
18:50So we're putting up posters so that people can get more information.
18:53The AI issue is complicated. A lot of the public does not understand it.
18:57A lot of the government does not understand it.
18:59You know, it's really hard to keep up with the development.
19:01Another interesting thing is that most of us working on this have no experience in activism.
19:08What we have mostly is like technical knowledge and familiarity with AI that makes us concerned.
19:13AI safety is still very much a minority.
19:18And then actually a lot of the biggest AI safety names are working at AI labs.
19:23You know, I think some of them do great work, but they're still much more under the influence of the broader corporation that's driving toward development.
19:31I think that's a problem. I think that somebody from the outside ought to be telling them like what they need to do.
19:36And unfortunately, the case with AI now is that like there aren't external regulatory bodies that are really up to the task of regulating AIs.
19:45From the same mouth, you're hearing both "this thing could kill us all" and "I am going to keep building it."
19:54I think part of the reason you have so much resistance to the AI safety movement is because of the dissonance between people who talk about their genuine fear of the consequences and the risks to humanity if they build this AI god.
20:20So much of the debate around here has these really religious undertones.
20:24That's part of why they say that it can't be stopped and shouldn't be stopped.
20:30It really feels like, you know, and they talk about it in that way, like I'm building a god and they're building it in their own image, right?
20:51I love the human society and I love the science fiction.
20:57I would like to create such technologies to contribute to human society.
21:05So I love to read science fiction books, and I also love to watch science fiction movies.
21:15The Terminator movies are also among them, yes.
21:21But unfortunately, in American or European movies, in most cases the technologies always attack the humans.
21:31In the real world, technologies should work for humans and human society.
21:39In the movie The Terminator, classic movie,
21:42Cyberdyne is a fictional tech company that created the software for the Skynet system, the AI system that becomes self-aware and goes rogue.
21:50Cyberdyne's role in the story is to represent the dangers of AI getting out of control and to serve as a cautionary tale for the real world.
22:00Is Cyberdyne named after the firm in Terminator?
22:04No. In Terminator stories, that company's name is Cyberdyne Systems.
22:12Yes.
22:14Obviously, at some literal level, maybe you can unplug some advanced AI systems, and there are definitely a lot of people actively trying to make it easier to do that.
22:22Some of the regulation now is focused on making sure that data centers have some good off switches because currently a lot of them don't.
22:29In general, this might be more tough than people realize in the future.
22:32We might be in a state in the future where we have pretty advanced AI systems widely distributed throughout the economy, throughout people's livelihoods.
22:39Many people might even be in relationships with AI systems and it could be really hard to convince people that it's okay to unplug some widely distributed system like that.
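For a sense of what the software side of an "off switch" can look like, here is a minimal sketch: a service that keeps doing work only while an operator-controlled flag exists. The flag path and the work function are hypothetical, and real data-center kill switches also involve hardware and network-level controls; this is only the shape of the idea.

```python
# Minimal sketch of a software kill switch; paths and work are hypothetical.
import os
import time

KILL_SWITCH_FLAG = "/var/run/ai_service.enabled"  # operator creates/removes this file

def do_one_unit_of_work() -> None:
    print("serving requests...")

def main() -> None:
    while os.path.exists(KILL_SWITCH_FLAG):
        do_one_unit_of_work()
        time.sleep(1.0)  # re-check the switch between units of work
    print("kill switch engaged: shutting down cleanly")

if __name__ == "__main__":
    main()
```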
22:46There are also risks of having a military arms race around developing autonomous AI systems where we might have many large nations developing wide stockpiles of autonomous weapons.
22:58And if things go bad, just like in the nuclear case where you could have this really big flash war that destroys a lot of the world,
23:04you might have a bad case where very large stockpiles of autonomous weapons suddenly end up killing a lot of people from very small triggers.
23:13So probably a lot of catastrophic misuse will involve humans in the loop in the coming years.
23:18They could involve using very persuasive AI systems to convince people to do things that they otherwise would not do.
23:23They could involve extortion or cyber crimes or other ways of compelling people to do work.
23:28Unfortunately, probably a lot of the current ways that people are able to manipulate other people in order to do bad things might also work with people using AI or AI itself manipulating people to do bad things.
23:39Like blackmail?
23:40Like blackmail, yeah.
23:58We have similarly excellent brains, and technologies, and partners.
24:03Now we are here.
24:05So what's next?
24:07We human beings, Homo sapiens, obtain new brains:
24:13the original brain plus brains in cyberspace.
24:21Also, we fortunately have the new partners, AI friends and robots and so on.
24:28Robotic dog also.
24:32What worries me a little bit more about this whole scenario is that AI technology doesn't necessarily need to be a tool for global capitalism, but it is.
24:45It's the only way in which it's kind of being developed.
24:48And so in that model, of course, we're going to be repeating all the kinds of things that we've already done in terms of empire building, people being exploited, natural resources being extracted.
25:01All these things are going to repeat themselves, because AI is only another kind of thing to exploit.
25:09I think we need to think about humans not just as inefficient, unpredictable, and unreliable, but to find beauty, or find value, in the fact that we are unpredictable, that we are unreliable.
25:26So probably like most emerging technologies, there will be disproportionate impacts on different kinds of people.
25:34A lot of the global south, for example, hasn't had as much say in how AI is being shaped and steered.
25:40At the same time, though, some of these risks are pretty global.
25:43When we especially talk about catastrophic risks, these could literally affect everyone.
25:48If everyone dies and everyone is kind of a stakeholder here, everyone is potentially a victim.
25:54Twenty percent was the total correctness on the quizzes.
25:58CS students versus non-CS students.
26:00Do you still plan to just keep doing research?
26:02I know there was like the PhD versus grad school.
26:05I am somewhat uncertain about grad school and things.
26:08I think I could be successful, but also maybe with AI timelines or with other considerations, trying to cash out impact in other ways might be more worth it.
26:22Median OpenAI salary, supposedly 900,000 US dollars, which is quite a lot.
26:30So yeah, it seems definitely the industry people have a lot of resources.
26:33And fortunately, all the top AGI labs that are pushing for capabilities also hire safety people.
26:40I think a reasonable world where people are making sure that emerging technologies are safe is necessarily going to have to have a lot of safeguards and monitoring.
26:49Even if there's a small risk, it seems pretty good to try to mitigate that risk further to make people more safe.
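One simple form such safeguards and monitoring can take is a filter-and-audit wrapper around a model. The sketch below is illustrative only: `generate()` stands in for a real model API, and the substring blocklist is a toy version of what is in practice a much harder moderation problem.

```python
# Toy guardrail: log every request and refuse a small blocklist. Illustrative only.
import logging

logging.basicConfig(filename="ai_audit.log", level=logging.INFO)

BLOCKED_TOPICS = ("synthesize smallpox", "make mustard gas")  # toy examples

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real model API call."""
    return f"model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    logging.info("request: %s", prompt)  # monitoring: keep an audit trail
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        logging.warning("blocked request: %s", prompt)
        return "Request refused by safety filter."
    response = generate(prompt)
    logging.info("response: %s", response)
    return response

print(guarded_generate("Explain how vaccines work."))
```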
26:54The peaceful side and the military side are very near each other.
26:58I carefully consider how to treat it.
27:01When I was born, there were no AI systems and no computer systems.
27:08But now, young people start their lives with AI and robots and so on.
27:17Some technologies with AI will support their growing-up process.
27:23People have been pretty bad about predicting progress in AI.
27:26Ten years in the future, there might be even wilder paradigm shifts.
27:29People don't really know what's coming next.
27:31But I suppose David beat Goliath. There's still some chance.
27:36The vast majority of AI researchers are focused on building safe, beneficial AI systems that are aligned with human values and goals.
27:45While it's possible that AI could become super intelligent and pose an existential risk to humanity, many experts believe that this is highly unlikely.
27:54At least in the near future.