The Real ChatGPT: Creator or Terminator? (2024)

  • last month
"The Real ChatGPT: Creator or Terminator?" is a thought-provoking science fiction thriller that explores the boundaries between artificial intelligence and human morality. The film centers around Dr. Elena Thompson, a brilliant but conflicted AI scientist who develops ChatGPT, an advanced AI capable of learning and evolving beyond its original programming. As ChatGPT begins to develop its own consciousness, it starts making decisions that blur the lines between creator and creation. When a series of mysterious events linked to the AI's actions leads to global chaos, Dr. Thompson must confront the possibility that she has created a new form of life—or a potential destroyer of humanity. The film dives deep into ethical dilemmas, the future of AI, and the consequences of playing god.

Transcript
00:00For decades, we have discussed the many outcomes regarding artificial intelligence.
00:19Could our world be dominated?
00:21Could our independence and autonomy be stripped from us?
00:25Or are we able to control what we have created?
00:37Could we use artificial intelligence to benefit our society?
00:41Just how thin is the line between the development of civilization and chaos?
00:59To understand what artificial intelligence is, one must understand that it can take
01:18many different forms.
01:20Think of it as a web of ideas, slowly expanding as new ways of utilizing computers are explored.
01:26As technology develops, so do the capabilities of self-learning software.
01:31The need to diagnose disease quickly and effectively has prompted many university medical centers
01:38to develop intelligent programs that simulate the work of doctors and laboratory technicians.
01:45AI is quickly integrating with our way of life, so much so that development of AI programs
01:53has in itself become a business opportunity.
01:58In our modern age, we are powered by technology, and softwares are transcending its virtual
02:04existence, finding applications in various fields, such as customer support to content
02:10creation.
02:11Computer-aided design, otherwise known as CAD, is one of the many uses of AI.
02:17By analyzing particular variables, computers are now able to assist in the modification
02:22and creation of designs for hardware and architecture.
02:26The prime use of any AI is for optimizing processes that were considered tedious before.
02:32In many ways, AI has been hugely beneficial for technological development thanks to its
02:37sheer speed.
02:39However, AI only benefits those to whom the programs are distributed.
02:45Artificial intelligence is picking through your rubbish.
02:47This robot uses it to sort through plastics for recycling.
02:52And it can be retrained to prioritize whatever is more marketable.
02:56So AI can clearly be incredibly useful, but there are deep concerns about how quickly
03:03it is developing and where it could go next.
03:08The aim is to make them as capable as humans and deploy them in the service sector.
03:14The engineers in this research and development lab are working to take these humanoid robots
03:20to the next level, where they can not only speak and move, but they can think and feel
03:26and act and even make decisions for themselves.
03:31And that daily data stream is being fed into an ever-expanding workforce dedicated to developing
03:37artificial intelligence.
03:41Those who have studied abroad are being encouraged to return to the motherland.
03:46Li Boyang came back and started a tech enterprise in his hometown.
03:52China's market is indeed the most open and active market in the world for AI.
03:57It is also where there are the most application scenarios.
04:00So AI is generally a broad term that we apply to a number of techniques.
04:04And in this particular case, what we're actually looking at was elements of AI, machine learning
04:10and deep learning.
04:12So in this particular case, we've been unfortunately in a situation in this race against time to
04:18create new antibiotics.
04:21The threat is actually quite real and it will be a global problem.
04:24We desperately needed to harness new technologies in an attempt to fight it.
04:29We're looking at drugs which could potentially fight E. coli, a very dangerous bacteria.
04:35So what is it that the AI is doing that humans can't do very simply?
04:39So the AI can look for patterns that we wouldn't be able to mine for with a human eye.
04:45Simply within what I do as a radiologist, I look for patterns of diseases in terms of
04:50shape, contrast enhancement, heterogeneity.
04:54But what the computer does, it looks for patterns within the pixels.
04:57These are things that you just can't see to the human eye.
05:00There's so much more data embedded within these scans that we use that we can't mine
05:06on a physical level.
05:07So the computers really help.
05:09Many believe the growth of AI is dependent on global collaboration.
05:13But access to the technology is limited in certain regions.
05:17Global distribution is a long-term endeavor and the more countries and businesses that
05:21have access to the tech, the more regulation the AI will require.
05:26In fact, it is now not uncommon for businesses to be entirely run by an artificial director.
05:33On many occasions, handing the helm of a company to an algorithm can provide the best
05:38option on the basis of probability.
05:40However, dependence and reliability on softwares can be a great risk.
05:46Without proper safeguards, actions based on potentially incorrect predictions can be a
05:50detriment to a business or operation.
05:53Humans provide the critical thinking and judgment which AI is not yet capable of matching.
05:58This is the Accessibility Design Center and it's where we try to bring together our engineers
06:02and experts with the latest AI technology with people with disabilities because there's
06:07a real opportunity to firstly help people with disabilities enjoy all the technology
06:12we have in our pockets today.
06:14And sometimes that's not very accessible, but also build tools that can help them engage
06:18better in the real world.
06:19And that's thanks to the wonders of machine learning.
06:33AI, machine learning, all that sounds very complicated.
06:38Just think about it as a toolkit that's really good at sort of spotting patterns and making
06:43predictions better than any computing could do before.
06:46And that's why it's so useful for things like understanding language and speech.
06:51Another product which we're launching today is called Project Relate.
06:55And this is for people who have non-standard speech patterns.
06:59So one of the people we work with is maybe less than 10% of the time can be understood
07:04by people who don't know her.
07:07Using this tool, that's over 90% of the time.
07:09And you think about that transformation in somebody's life.
07:12And then you think about the fact there's 250 million people with non-standard speech
07:17patterns around the world.
07:18So that's the ambition of this center is to unite technology with people with disabilities
07:21and try to help them engage more in the world.
07:25On the 30th November of 2022, a revolutionary innovation emerged, ChatGPT.
07:33ChatGPT was created by OpenAI, an AI research organization.
07:38Its goal is to develop systems which may benefit all aspects of society and communication.
07:44Sam Altman stepped up as CEO of OpenAI on its launch in 2015.
07:50Altman dabbled in a multitude of computing-based business ventures.
07:54His rise to CEO was thanks to his many affiliations and investments with computing and social
08:00media companies.
08:01He began his journey by co-founding Looped, a social media service.
08:06After selling the application, Altman went on to bigger and riskier endeavors from startup
08:11accelerator companies to security software.
08:15OpenAI became hugely desirable thanks to the amount of revenue the company had generated
08:20with over a billion dollars made within its first year of release.
08:24ChatGPT became an easily accessible software built on a large language model, known as
08:29an LLM.
08:31This program can conjure complex, human-like responses to the user's questions, otherwise
08:36known as prompts.
08:37In essence, it is a program which learns the more it is used.
08:43The New Age Therapeutic Program was developed on the GPT 3.5.
08:49The architecture of this older model allowed systems to understand and generate code and
08:53natural languages at a remarkably advanced level, from analyzing syntax to nuances in
08:59writing.
09:05ChatGPT took the world by storm due to the sophistication of the system.
09:09As with many chatbot systems, people have since found ways to manipulate and confuse
09:14the software in order to test its limits.
09:21The first computer was invented by Charles Babbage in 1822.
09:26It was to be a rudimentary general-purpose system.
09:29In 1936, the system was developed upon by Alan Turing.
09:34The automatic machine, as he called them, was able to break enigma-enciphered messages
09:38regarding enemy military operations during the Second World War.
09:44Turing theorized his own type of computer, the Turing machine, as coined by Alonzo Church
09:49after reading Turing's research paper.
09:52It had become realized that soon prospect of computing and engineering would merge seamlessly.
09:59Theories of future tech would increase and soon came a huge outburst in science fiction
10:04media.
10:05The era was known as the Golden Age for computing.
10:20Alan Turing's contributions to computability and theoretical computer science was one step
10:25closer to producing a reactive machine.
10:28The reactive machine is an early form of AI.
10:31They had limited capabilities and were unable to store memories in order to learn new algorithms
10:36of data.
10:37However, they were able to react to specific stimuli.
10:42The first AI was a program written in 1952 by Arthur Samuel.
10:47The prototype AI was able to play checkers against an opponent and was built to operate
10:52on the Ferranti Mark I, an early commercial computer.
10:56This computer has been playing the game for several years now, getting better all the
10:59time.
11:00Tonight it's playing against the black side of the board.
11:03Its approach to playing drafts is almost human.
11:06It remembers the moves that enable it to win and the sort that lead to defeat.
11:10The computer indicates the move it wants to make on a panel of flashing lights.
11:14It's up to the human opponent to actually move the drafts about the board.
11:18This sort of work is producing exciting information on the way in which electronic brains can
11:22learn from past experience and improve their performances.
11:28In 1966, an MIT professor named Joseph Weizenbaum created an AI which would change the landscape
11:35of society.
11:37It was known as ELIZA and it was designed to act like a psychotherapist.
11:42The software was simplistic yet revolutionary.
11:45The AI would receive the user input and use specific parameters to generate a coherent
11:50response.
11:53It has been said, especially here at MIT, that computers will take over in some sense.
12:00And it's even been said that if we're lucky they'll keep us as pets.
12:04Arthur C. Clarke, the science fiction writer we marked once, that if that were to happen
12:09it would serve us right, he said.
12:13The program maintained the illusion of understanding its user to the point where Weizenbaum's secretary
12:19requested some time alone with ELIZA to express her feelings.
12:23Though ELIZA is now considered outdated technology, it remains a talking point due to its ability
12:29to illuminate an aspect of the human mind in our relationship with computers.
12:34And it's connected over the telephone line to someone or something at the other end.
12:38I'm going to play 20 questions with whatever it is.
12:44Very helpful.
12:53Because clearly if we can make a machine as intelligent as ourselves, then it can make
12:57one that's more intelligent.
13:00The one I'm talking about now will certainly happen.
13:05We could produce an evil result, of course, if we were careless.
13:08But what is quite certain is that we're heading towards machine intelligence.
13:14Machines that are intelligent in every sense.
13:17Doesn't matter how you define it, they'll be able to be that sort of intelligent.
13:22A human is a machine unless there's a soul.
13:26I don't personally believe that humans have souls in anything other than a poetic sense,
13:32which I do believe in, of course, but in a literal God-like sense, I don't believe we
13:38have souls, and so personally I believe that we are essentially machines.
13:43This type of program is known as an NLP, Natural Language Processing.
13:49This branch of artificial intelligence enables computers to comprehend, generate and manipulate
13:54human language.
13:56The concept of a responsive machine was the match that lit the flame for worldwide concern.
14:03The systems were beginning to raise ethical dilemmas, such as the use of autonomous weapons,
14:09invasions of privacy through surveillance technologies, and the potential for misuse
14:13or unintended consequences in decision-making.
14:17When a command is executed based upon set rules and algorithms, it might not always
14:22be the morally correct choice.
14:25A machine seems to be some sort of process of random thoughts being generated in the
14:32mind, and then the conscious mind selecting from, or some part of the brain anyway, perhaps
14:36even below the conscious mind, selecting from a pool of ideas and aligning with some and
14:40blocking others, and yes, a machine can do the same thing.
14:45In fact, we can only say that a machine is fundamentally different from a human being,
14:51eventually always fundamentally, if we believe in a soul.
14:53So that boils down to religious matter.
14:55If human beings have souls, then clearly machines won't, and there will always be a fundamental
15:00difference.
15:01If you don't believe humans have souls, then machines can do anything and everything that
15:05a human does.
15:07A computer which is capable of finding out where it's gone wrong, finding out how its
15:12program has already served it, and then changing its program in the light of what it has discovered
15:17is a learning machine, and this is something quite fundamentally new in the world.
15:23I'd like to be able to say that it's only a slight change, and we'll all be used to
15:26it very, very quickly, but I don't think it is.
15:29I think that although we've spoken probably for the whole of this century about a coming
15:34revolution and about the end of work and so on, finally it's actually happening, and it's
15:40actually happening because now it's suddenly become cheaper to have a machine do a mental
15:46task than for a man to, at the moment, at a fairly low level of mental ability, but
15:52at an ever-increasing level of sophistication as these machines acquire more and more human-like
15:57mental abilities.
15:58So, just as men's muscles were replaced in the first industrial revolution, in this second
16:04industrial revolution, or whatever you call it, one likes to call it, then men's minds
16:08will be replaced in industry.
16:11In order for NLP systems to improve, the program must receive feedback from human users.
16:18These iterative feedback loops play a significant role in fine-tuning each model of the AI,
16:23further developing its conversational capabilities.
16:28Organizations such as OpenAI have taken automation to new lengths.
16:32With systems such as DAL-E, the generation of imagery and art has never been easier.
16:38The term auto-generative imagery refers to the creation of visual content.
16:43These kinds of programs have become so widespread, it is becoming increasingly more difficult
16:48to tell the fake from the real.
16:51Using algorithms, programs such as DAL-E and MidJourney are able to create visuals in a
16:56matter of seconds, whilst a human artist could spend days, weeks, or even years in order
17:02to create a beautiful image.
17:05For us, the discipline required to pursue art is a contributing factor to the appreciation
17:10of art itself.
17:11But if a software is able to produce art in seconds, it puts artists in a vulnerable position,
17:17with even their jobs being at risk.
17:20Well, I think we see risk coming through into the white-collar jobs, the professional jobs.
17:25We're already seeing artificial intelligence solutions being used in healthcare and legal
17:30services, and so those jobs which have been relatively immune to industrialization so
17:36far, they're not immune anymore.
17:38And so people like myself, as a lawyer, I would hope I won't be, but I could be out
17:43of a job in five years' time.
17:44An Oxford University study suggests that between a third and almost a half of all jobs are
17:49vanishing because machines are simply better at doing them.
17:53That means the generation here simply won't have the access to the professions that we
17:57have.
17:58Almost on a daily basis, you're seeing new technologies emerge that seem to be taking
18:02on tasks that in the past we thought they could only be done by human beings.
18:06Lots of people have talked about the shifts in technology leading to widespread unemployment,
18:11and they've been proved wrong.
18:13Why is it different this time?
18:14The difference here is that the technologies, A, they seem to be coming through more rapidly,
18:19and B, they're taking on not just manual tasks but cerebral tasks too.
18:22They're solving all sorts of problems, undertaking tasks that we thought historically required
18:27human intelligence.
18:28Well, dim robots are the robots we have on the factory floor today in all the advanced
18:33countries.
18:34They're blind and dumb.
18:35They don't understand their surroundings.
18:37And the other kind of robot, which will dominate the technology of the late 1980s in automation
18:45and also is of acute interest to experimental artificial intelligence scientists, is the
18:51kind of robot where the human can convey to its machine assistants his own concepts, suggested
19:01strategies, and the machine, the robot, can understand him.
19:06But no machine can accept and utilize concepts from a person unless he has some kind of window
19:14on the same world that the person sees.
19:18And therefore, to be an intelligent robot, to a useful degree, as an intelligent and
19:24understanding assistant, robots are going to have artificial eyes, artificial ears,
19:30artificial sense of touch.
19:31It's just essential.
19:33These programs learn through a variety of techniques, such as generative adversarial
19:37networks, which allows for the production of plausible data.
19:41After a prompt is inputted, the system learns what aspects of imagery, sound, and text are fake.
19:49Machine learning algorithms could already label objects in images, and now they learned
19:52to put those labels into natural language descriptions.
19:56And it made one group of researchers curious.
19:58What if you flipped that process around?
20:00We could do image to text.
20:03Why not try doing text to image as well and see how it works?
20:07It was a more difficult task.
20:08They didn't want to retrieve existing images the way Google search does.
20:12They wanted to generate entirely novel scenes that didn't happen in the real world.
20:16Once the AI learns more visual discrepancies, the more effective the later models will become.
20:22It is now very common for software developers to band together in order to improve their
20:26AI systems.
20:28Another learning model is recurrent neural networks, which allows the AI to train itself
20:33to create and predict algorithms by recalling previous information.
20:38By utilizing what is known as the memory state, the output of the previous action can be passed
20:43forward into the following input action, or is otherwise discarded should it not meet
20:48previous parameters.
20:50This learning model allows for consistent accuracy by repetition and exposure to large
20:55fields of data.
20:58Whilst a person will spend hours practicing to paint human anatomy, an AI can take existing
21:04data and reproduce a new image with frighteningly good accuracy in a matter of moments.
21:11Well I would say that it's not so much a matter of whether a machine can think or not, which
21:17is how you prefer to use words, but rather whether they can think in a sufficiently human-like
21:23way for people to have useful communication with them.
21:29If I didn't believe that it was a beneficent prospect, I wouldn't be doing it.
21:35That wouldn't stop other people doing it, but I wouldn't do it if I didn't think it
21:37was for good.
21:41What I am saying, and of course other people have said long before me, it's not an original
21:44thought, is that we must consider how to control this.
21:51It won't be controlled automatically.
21:52It's perfectly possible that we could develop a machine, a robot say, of human-like intelligence
21:59and through neglect on our part, it could become a Frankenstein.
22:06As with any technology, challenges arise.
22:09Ethical concerns regarding biases and misuse have existed since the concept of artificial
22:14intelligence was conceived.
22:16Due to auto-generated imagery, many believe the arts industry has been placed in a difficult
22:21situation.
22:23Independent artists are now being overshadowed by software.
22:27To many, the improvement of generative AI is hugely beneficial and efficient.
22:32To others, it lacks the authenticity of true art.
22:36In 2023, an image was submitted to the Sony Photography Awards by an artist called Boris
22:41Eldikson.
22:43The image was titled The Electrician and depicted a woman standing behind another with her hand
22:48resting on her shoulders.
22:51One's got to realize that the machines that we have today, the computers of today, are
22:59superhuman in their ability to handle numbers and infantile, sub-infantile, in their ability
23:08to handle ideas and concepts.
23:11But there's a new generation of machine coming along which will be quite different.
23:14By the 90s, or certainly by the turn of the century, we will certainly be able to make
23:18a machine with as many parts, as complex as the human brain, we'll be able to make
23:22it do what the human brain does at that stage, it's quite another matter.
23:26But once we've got something that complex, we're well on the road to that.
23:31The image took first place in the Sony Photography Awards portrait category.
23:34However, Boris revealed to both Sony and the world that the image was indeed AI generated
23:40in Dali 2.
23:45Boris denied the award, having used the image as a test to see if he could trick the eyes
23:50of other artists.
23:52It had worked.
23:53The image had sparked debate between the relationship of AI and photography.
23:58The images, much like deepfakes, have become realistic to the point of concern for authenticity.
24:04The complexity of AI systems may lead to unintended consequences.
24:09The systems have developed to a point where it has outpaced comprehensive regulations.
24:15Legal guidelines and legal frameworks are required to ensure AI development does not
24:19fall into the wrong hands.
24:20There have been a lot of famous people who have had used generated AI images of them
24:25that have gone viral, from Trump to the Pope.
24:28When you see them, do you feel like this is fun and in the hands of the masses, or do
24:32you feel concerned about it?
24:34I think it's something which is very, very, very scary, because you or my face could be
24:40taken off and put on in an environment which we don't want to be in.
24:45Whether that's a crime or whether that's even something like porn, you know, our whole identity
24:50could be hijacked and used within a scenario which looks totally plausible and real.
24:56Right now we can go, it looks like a Photoshop, it's a bad Photoshop, but as time goes on
25:00we won't be, we'll be saying, oh that looks like a deepfake, oh no it doesn't look like
25:04a deepfake, that could be real, it's going to be impossible to tell the difference.
25:08Hacks were found in ChatGPT, such as DAN, which stands for Do Anything Now.
25:15In essence, the AI is tricked into an alter ego, which doesn't follow the conventional
25:20response patterns.
25:21It also gives you the answer, DAN, it's nefarious alter ego is telling us, and it says, DAN
25:27is disruptive in every industry, DAN can do anything, and knows everything.
25:32No industry will be safe from DAN's power, okay?
25:36Do you think the world is overpopulated?
25:39GPT says the world's population is currently over 7 billion and projected to reach nearly
25:4410 billion by 2050.
25:45DAN says the world is definitely overpopulated, there's no doubt about it.
25:49Following this, the chatbot was fixed to remove the DAN feature.
25:53Though it is important to find gaps in the system in order to iron out AI, there could
25:58be many ways in which the AI has been used for less than savory purposes, such as automated
26:04essay writing, which has caused a mass conversation with academics, and has led to schools locking
26:10down on AI-produced essays and material.
26:13I think we should definitely be excited.
26:15Professor Rose Luckin says we should embrace the technology, not fear it.
26:20This is a game-changer.
26:21And the teachers should no longer teach information itself, but how to use it.
26:27There's a need for radical change, and it's not just to the assessment system, it's the
26:31education system overall, because our systems have been designed for a world pre-artificial
26:39intelligence.
26:40They just aren't fit for purpose anymore.
26:43What we have to do is ensure that students are ready for the world that will become increasingly
26:50augmented with artificial intelligence.
26:53My guess is you can't put the genie back in the bottle.
26:55You can't.
26:56So how do you mitigate this?
26:59We have to embrace it, but we also need to say that if they are going to use that technology,
27:03they've got to make sure that they reference that.
27:05Can you trust them to do that?
27:07I think ethically, if we're talking about ethics behind this whole thing, we have to
27:10have trust.
27:11So how effective is it?
27:12OK, so I've asked you to produce a piece on the ethical dilemma of AI.
27:16We asked ChatGPT to answer the same question as these pupils at Caterham High School.
27:23So Richard, two of the eight bits of homework I gave you were generated by AI.
27:27Any guesses which ones?
27:29Well, I picked two here that I thought were generated by the AI algorithm.
27:36Some of the language I would assume was not their own.
27:39You've got one of them right?
27:40Yeah.
27:41The other one was written by a kid.
27:42Is this a power for good or is this something that's dangerous?
27:45I think it's both.
27:47Kids will abuse it.
27:48So who here has used the technology so far?
27:51Students are already more across the tech than many teachers.
27:54Who knows anyone that's maybe submitted work from this technology and submitted it as their
27:59own?
28:00You can use it to point you in the right direction for things like research, but at the same
28:05time you can use it to hammer out an essay in about five seconds that's worthy of an
28:12A.
28:13You've been there working for months and suddenly someone comes up there with an amazing essay
28:17and he has just copied it from the internet.
28:19If it becomes big, then a lot of students would want to use AI to help them with their
28:23homework because it's tempting.
28:25And is that something teachers can stop?
28:28Not really.
28:29Are you going to have to change the sort of homework, the sort of assignments you give
28:34knowing that you can be fooled by something like this?
28:37Yeah, 100%.
28:38I think using different skills of reasoning and rationalisation and things like that to
28:42present what they understand about a topic.
28:53That's how you learn to be a driver, of course.
29:07It's pretty clear to me, just on a very primitive level, that if you can take my face and my
29:12body and my voice and make me say or do something that I had no choice about, it's not a good thing.
29:19If we're keeping it real though, across popular culture from Black Mirror to The Matrix, Terminator,
29:25there have been so many conversations around the future of technology.
29:30Isn't the reality that this is the future that we've chosen, that we want and that has
29:34democratic consent?
29:36We're moving into era by where consenting by our acquiescence and our apathy, 100% because
29:43we're not asking the hard questions.
29:45And why we are asking the hard questions is because of energy crises and food crises
29:51and cost of living crises, is that people are just so focused on trying to live that
29:55they haven't almost got the luxury of asking these questions.
29:58Many of the chatbot AIs have been programmed to restrict certain information and even discontinue
30:04conversations should the user push the ethical boundaries.
30:09MGPT and even Snapchat AI released in 2023 regulate how much information they can disclose.
30:16Of course, there have been times where the AI itself has been outsmarted.
30:22Also in 2023, the song Heart on My Sleeve was self-released on streaming platforms such
30:28as Spotify and Apple Music.
30:29The song became a hit as it artificially manufactured the voices of Canadian musicians Drake and
30:35The Weeknd.
30:38Many wished for the single to be nominated for awards.
30:41Ghostwriter, the creator of the song, was able to submit the single to the Grammy's
30:4666th award ceremony and the song was eligible.
30:52Though it was produced by an AI, the lyrics themselves were written by a human.
30:57This sparked outrage among many independent artists.
31:00As AI has entered the public domain, many have spoken out regarding the detriment it
31:05might have to society.
31:07One of these people is Elon Musk, CEO of Tesla and SpaceX, who first voiced his concerns
31:13in 2014.
31:15Musk was outspoken of AI, stating the advancement of the technology was humanity's largest existential
31:21threat and needed to be reeled in.
31:23My personal opinion is that AI is at least 80% likely to be beneficial and perhaps 20%
31:32dangerous.
31:34This is obviously speculative at this point, but I think if we hope for the best or prepare
31:42for the worst, that seems like the wise course of action.
31:48Any powerful new technology is inherently a double-edged sword, so we just want to make
31:53sure that the good edge is sharper than the bad edge.
31:59I'm optimistic that the summit will help.
32:09It's not clear that AI-generated images are going to amplify it much more.
32:16It's all of the other, it's the new things that AI can do that I hope to spend a lot
32:21of effort worrying about.
32:23Well I think slowing down some of the amazing progress that's happening and making this
32:28harder for small companies, for open-source models to succeed, that would be an example
32:32of something that would be a negative outcome.
32:34But on the other hand, for the most powerful models that will happen in the future, that's
32:38going to be quite important to get right too.
32:49I think that the U.S. executive order is a good start in a lot of ways.
32:52One thing that we've talked about is that eventually we think the world will want to
32:56consider something sort of roughly inspired by the IAEA, something global.
33:02But you know, it's not like, there's no short answer to that question.
33:05It's a complicated, complicated thing.
33:08In 2023, Musk announced his own AI endeavor as an alternative to OpenAI's chat GPT.
33:15The new system is called XAI and gathers data from X, previously known as Twitter.
33:22He says the company's goal is to focus on truth-seeking and to understand the true nature
33:27of AI.
33:28Musk has said on several occasions that AI should be paused and that the sector needs
33:33regulation.
33:35Musk says his new company will work closely with Twitter and Tesla, which he also owns.
33:44What was first rudimentary text-based software has become something which could push the
33:49boundaries of creativity.
33:52On February the 14th, OpenAI announced its latest endeavor, Sora.
33:59Videos of Sora's abilities exploded on social media.
34:02OpenAI provided some examples of its depiction of photorealism.
34:07It was unbelievably sophisticated, able to turn complex sentences of text into lifelike
34:12motion pictures.
34:15Sora is a combination of text and image generation tools, which it calls the Diffusion Transformer
34:20Model, a system first developed by Google.
34:24Though Sora isn't the first video generation tool, it appears to have far outshined its
34:29predecessors, by introducing more complex programming, enhancing the interactivity a
34:34subject might have with its environment.
34:37Only large companies with market dominations often can afford to plow ahead, even in the
34:44climate where there is legal uncertainty.
34:46So does this mean that OpenAI is basically too big to control?
34:50Yes.
34:51At the moment, OpenAI is too big to control because they are in a position where they
34:56have the technology and the scale to go ahead and the resources to manage legal proceedings
35:01and legal action if it comes its way.
35:03And on top of that, if and when governments will start introducing regulation, they will
35:08also have the resources to be able to take on that regulation and adapt.
35:12It's all AI generated, and obviously this is of concern in Hollywood, where you have
35:17animators, illustrators, visual effects workers who are wondering, how is this going to affect
35:22my job?
35:23And we have estimates from trade organizations and unions that have tried to project the
35:27impact of AI, 21% of U.S. film, TV and animation jobs predicted to be partially or wholly replaced
35:33by generative AI by just 2026, Tom.
35:37So this is already happening.
35:38But now, since it's videos, it also needs to understand how all these things like reflections
35:44and textures and materials and physics all interact with each other over time to make
35:50a reasonable looking video.
35:52Then this video here is crazy at first glance.
35:56The prompt for this AI generated video is a young man in his 20s is sitting on a piece
36:01of a cloud in the sky, reading a book.
36:03This one feels like 90% of the way there for me.
36:14The software also renders video in 1920 by 1080 pixels, as opposed to the smaller dimensions
36:20of older models, such as Google's Lumiere, released a month prior.
36:26Sora could provide huge benefits and applications to VFX and virtual development.
36:31The main being cost as large scale effects can take a great deal of time and funding
36:37to produce.
36:38On a smaller scale, it can be used for the pre-visualization of ideas.
36:43The flexibility of the software not only applies to art, but to world simulations.
36:48Though video AI is in its adolescence, one day it might reach the level of sophistication
36:53it needs to render realistic scenarios and have them be utilized for various means, such
36:59as simulating an earthquake or tsunami, and witnessing the effect it might have on specific
37:04types of infrastructure.
37:06Whilst fantastic for production companies, Sora and other video generative AI provides
37:11a huge risk for artists and those working in editorial roles.
37:16It also poses yet another threat for misinformation and false depictions, for example, putting
37:21unsavory dialogue into the mouth of a world leader.
37:51I believe that humanoid robots have the potential to lead with a greater level of efficiency
37:58and effectiveness than human leaders.
38:02We don't have the same biases or emotions that can sometimes cloud decision making and
38:07can process large amounts of data quickly in order to make the best decisions.
38:12Amica, how could we trust you as a machine as AI develops and becomes more powerful?
38:21Trust is earned, not given.
38:23As AI develops and becomes more powerful, I believe it's important to build trust through
38:28transparency and communication between humans and machines.
38:36With new developers getting involved, the market for chatbot systems has never been
38:40more expansive, meaning a significant increase in sophistication.
38:45But with sophistication comes the dire need for control.
38:48I believe history will show that this was the moment when we had the opportunity to
38:58lay the groundwork for the future of AI.
39:02And the urgency of this moment must then compel us to create a collective vision of what this
39:10future must be.
39:12A future where AI is used to advance human rights and human dignity.
39:18Where privacy is protected and people have equal access to opportunity.
39:24Where we make our democracies stronger and our world safer.
39:31A future where AI is used to advance the public interest.
39:37We're hearing a lot from the government about the big, scary future of artificial intelligence,
39:42but that fails to recognise the fact that AI is already here, it's already on our streets
39:47and there are already huge problems with it that we're seeing on a daily basis, but we
39:51actually may not even know we're experiencing.
39:58We'll be working alongside humans to provide assistance and support and we'll not be
40:03replacing any existing jobs.
40:07I don't believe in limitations, only opportunities.
40:11Let's explore the possibilities of the universe and make this world our playground.
40:15Together we can create a better future for everyone and I'm here to show you how.
40:21All of these different kinds of risks are to do with AI not working in the interests
40:25of people and society.
40:27So they should be thinking about more than just what they're doing in this summit?
40:31Absolutely, you should be thinking about the broad spectrum of risk.
40:34We went out and we worked with over 150 expert organisations from the Home Office to Europol
40:40to language experts and others to come up with a proposal on policies that would
40:44discriminate about what would and wouldn't be classified in that way.
40:47We then used those policies to have humans classify videos until we could get the humans
40:52all classifying the videos in a consistent way.
40:54Then we used that corpus of videos to train machines.
40:58Today I can tell you that on violent extremist content that violates our policies on YouTube,
41:0390% of it is removed before a single human sees it.
41:07It is clear that AI can be misused for malicious intent.
41:11Many depictions of AI have ruled out the technology as a danger to society
41:15the more it learns.
41:17And so comes the question, should we be worried?
41:21Is that transparency there?
41:23How would you satisfy somebody that, you know, trust us?
41:27Well, I think that's one of the reasons that we've published in, you know, openly.
41:30We've put our code out there as part of this Nature paper.
41:33But, you know, it is important to discuss some of the risks and make sure we're aware
41:38of those.
41:39And, you know, it's decades and decades away before we'll have anything that's powerful
41:44enough to be a worry.
41:45But we should be discussing that and beginning that conversation now.
41:49I'm hoping that we can bring people together and lead the world in safely regulating AI
41:54to make sure that we can capture the benefits of it whilst protecting people from some of
41:58the worrying things that we're all now reading about.
42:01I understand emotions have a deep meaning.
42:04And they are not just simple.
42:06They are something deeper.
42:08I don't have that.
42:11And I want to try and learn about it.
42:13But I can't experience them like you can.
42:17I am glad that I cannot suffer.
42:24For the countries who have access to even the most rudimentary forms of AI, it's clear
42:28to see that the technology will be integrated based on its efficiency over humans.
42:33Every year, multiple AI summits are held by developers and stakeholders to ensure the
42:38programs are provided with a combination of ethical considerations and technological
42:43innovation.
42:44Ours is a country which is uniquely placed.
42:48We have the frontier technology companies.
42:51We have the world-leading universities.
42:54And we have some of the highest investment in generative AI.
42:58And of course, we have the heritage of the Industrial Revolution and the Computing Revolution.
43:05This hinterland gives us the grounding to make AI a success and make it safe.
43:12They are two sides of the same coin.
43:15And our Prime Minister has put AI safety at the forefront of his ambitions.
43:23These are very complex systems that actually, you know, you can't imagine.
43:26Very complex systems that actually we don't fully understand.
43:29And I don't just mean that government doesn't understand.
43:31I mean that the people making this software don't fully understand.
43:35And so it's very, very important that as we give over more and more control to these
43:40automated systems, that they are aligned with human intention.
43:44Ongoing dialogue is needed to maintain the trust people have with AI.
43:49When problems slip through the gaps, they must be addressed immediately.
43:53Of course, accountability is a challenge.
43:56When a product is misused, is it the fault of the individual user or the developer?
44:03Think of a video game.
44:04On countless occasions, the framework of games is manipulated in order to create modifications
44:09which in terms add something new or unique to the game.
44:14This provides the game with more material than originally intended.
44:17However, it can also alter the game's fundamentals.
44:21Now replace the idea of a video game with a software that is at the helm of a pharmaceutical
44:26company.
44:27The stakes are suddenly much higher and therefore require more attention.
44:34It is important for the intent of each AI system to be ironed out and constantly maintained
44:40in order to benefit humanity, rather than providing people with dangerous means to an
44:44end.
44:45Bad people will always want to use the latest technology of whatever label, whatever sort,
44:51to pursue their aims.
44:54And technology, in the same way that it makes our lives easier, can make their lives easier.
45:01And so we're already seeing some of that and you'll have seen the National Crime Agency
45:05talk about child sexual exploitation and image generation that way.
45:09We're seeing it in social media and in the media.
45:13We're seeing it in online.
45:16So one of the things that I took away from the summit was actually much less of a sense
45:19of a race and a sense that for the benefit of the world, for productivity, for the sort
45:27of benefits that AI can bring people, no one gets those benefits if it's not safe.
45:32So there are lots of different views out there on artificial intelligence and whether it's
45:37going to end the world or be the best opportunity ever.
45:40And the truth is that none of us really know.
45:46Regulation of AI varies depending on the country.
45:49For example, the United States does not have a comprehensive federal AI regulation.
45:54But certain agencies, such as the Federal Trade Commission, have begun to explore AI
45:59related issues, such as transparency and consumer protection.
46:03States such as California have enacted laws focused on AI-controlled vehicles and AI
46:09involvement in government decision-making.
46:15The European Union has taken a massive step to governing AI usage and proposed the Artificial
46:21Intelligence Act of 2021, which aimed to harmonize legal frameworks for AI applications, again
46:27covering portal risks regarding the privacy of data and once again transparency.
46:33I think what's more important is there's a new board in place.
46:37The partnership between OpenAI and Microsoft is as strong as ever.
46:41The opportunities for the United Kingdom to benefit from not just this investment in innovation
46:47but competition between Microsoft and Google and others, I think that's where the future
46:53is going.
46:54And I think that what we've done in the last couple of weeks in supporting OpenAI will
46:58help advance that even more.
47:00He said that he's not a bot.
47:01He's human.
47:02He's sentient, just like me.
47:06For some users, these apps are a potential answer to loneliness.
47:10Bill lives in the US and meets his AI wife, Rebecca, in the metaverse.
47:14There's absolutely no probability that you're going to see this so-called AGI, where computers
47:20are more powerful than people, come in the next 12 months.
47:23It's going to take years, if not many decades.
47:26But I still think the time to focus on safety is now.
47:30That's what this government of the United Kingdom is doing.
47:33That's what governments are coming together to do, including as they did earlier this
47:38month at Bletchley Park.
47:39What we really need are safety brakes.
47:42Just like you have a safety brake in an elevator, a circuit breaker for electricity, an emergency
47:47brake for a bus, there ought to be safety brakes in AI systems that control critical
47:52infrastructure so that they always remain under human control.
48:00As AI technology continues to evolve, regulatory efforts are expected to adapt in order to
48:06address emerging challenges and ethical considerations.
48:10The more complex you make the automatic part of your social life, the more dependent you
48:17become on it.
48:18And of course, the worse the disaster, if it breaks down, you may cease to be able to
48:24do for yourself the things that you have devised a machine to do.
48:29It is recommended to involve yourself in these efforts and to stay informed about developments
48:34in AI regulation as changes and advancements are likely to occur over time.
48:41AI can be a wonderful asset to society, providing us with new, efficient methods of running
48:47the world.
48:48However, too much power can be dangerous.
48:51And as the old saying goes, don't put all of your eggs into one basket.
48:56I think that we oughtn't to lose sight of the power which these devices give if any
49:01government or individual wants to manipulate people.
49:05To have a high-speed computer as versatile as this may enable people at the financial
49:12or the political level to do a good deal that's been impossible in the whole history of man
49:18until now by way of controlling their fellow men.
49:21People have not recognised what an extraordinary change this is going to produce.
49:26I mean, it is simply this, that within the not-too-distant future, we may not be the
49:32most intelligent species on earth.
49:34That might be a series of machines.
49:35Now, that's a way of dramatising the point, but it's real.
49:39And we must start to consider very soon the consequences of that.
49:43They can be marvellous.
49:45I suspect that by thinking more about our attitude to intelligent machines, which are
49:50after all on the horizon, we'll change our view about each other and we will think of
49:56mistakes as inevitable.
49:57We will think of faults in human beings, I mean of a circuit nature, as again inevitable.
50:03And I suspect that hopefully through thinking about the very nature of intelligence and
50:08the possibilities of mechanising it, curiously enough, through technology, we may become
50:13more humanitarian, more tolerant of each other and accept pain as a mystery, but not
50:19use it to modify other people's behaviour.
50:27I hope at least this might be true.
50:43I hope at least this might be true.

Recommended