Transcript
00:00 I'll ask a question here. Okay, so you're saying that those AI systems are going to be uncontrollable, and also hard to regulate. So what is the solution, if it is not regulation, in your mind?
00:13 You already made fun of that solution: basically, not building superintelligent systems. I'm not saying it's actually going to happen; I'm just saying that if we were a smarter civilization, we would not create our own replacements. But I'm pretty sure you're right, and we will create those systems. And I don't think there is a technical solution for indefinitely controlling superintelligent systems of any capability. We may be good at controlling GPT-5, maybe GPT-6, but this process doesn't stop; at some point in the future they become more capable.
00:43 We will make a fatal mistake. So technically, you cannot solve this as a permanent thing; by analogy with a perpetual motion machine, you're trying to create a perpetual safety machine. In cybersecurity, if you make a mistake, it's not a big deal: we'll give you a new credit card, replace your account. But if you're talking about existential problems, one very big mistake is the last one you'll ever make. So there is no technical solution.
01:07 And governance is security theater. Making it illegal to create deadly AI does not actually make a difference.