Transcript
00:00In 1997, IBM's Deep Blue defeated the world champion, Garry Kasparov, at chess.
00:07In 2016, DeepMind's AlphaGo beat Lee Sedol, one of the world's top Go players.
00:15In the future, it will be much more than just board games.
00:19So what do you predict for tomorrow's AI?
00:24It was actually during ancient times that we first created myths surrounding artificial
00:28intelligence.
00:30Greek legends were among the earliest, such as the story of Talos, a bronze automaton gifted
00:35with intelligence and tasked with defending Crete.
00:38For most of our history, though, machines that could think have been far from the realms
00:41of reality.
00:42It wasn't until around the middle of the 20th century that serious research into AI
00:47began.
00:48Alan Turing was one of the earliest pioneers of the field, publishing Computing Machinery and
00:53Intelligence in 1950.
00:55In it, he described what became known as the Turing Test, a way to determine if a computer can indeed
01:00think intelligently.
01:01It's been over 70 years since this paper, and now a variety of different AI is commonly
01:06used in daily life.
01:08Language translation, facial recognition, online advertising, search engines… there's
01:13an ever-growing list.
01:16Slowly it's starting to feel like there isn't a field we haven't applied it to.
01:19And clearly, we have quickly found a lot of benefits to AI.
01:23For example, they can easily handle large quantities of data, are free from human error,
01:28and of course, they can work 24-7.
01:30Generally, it seems that technology is improving society, and should continue to do so.
01:36AIs are a route to ultimate efficiency.
01:38However, many believe that we should be extremely cautious.
01:42Renowned physicist Professor Stephen Hawking once said, quote, the development of full
01:47artificial intelligence could spell the end of the human race, end quote.
01:51He followed this up by saying that AI would far exceed humans, since our advancements
01:56are limited by biological evolution.
01:59The late scientist thought we should be extremely careful.
02:02How justified, exactly, were his fears?
02:05Even in its current state, people already worry that AI can be used maliciously.
02:10We're currently in a worldwide race for lethal autonomous weapons, or LAWS.
02:15In 2024, the US Department of Defense promised $1 billion for the Replicator program, which
02:22aims to field thousands of autonomous war drones.
02:25Shortly, it's expected that drones will be able to use widespread facial recognition
02:30to target and attack specific individuals.
02:33Overall, warfare is one of the simplest fields we can apply AI to.
02:37And as a result, the United Nations has been debating a worldwide ban on autonomous weapons.
02:43Unfortunately, quite a few countries oppose such a ban.
02:46So, governments worldwide are weaponising artificial intelligence… but can we be sure
02:51that those AI weapons will stay loyal to their creators?
02:55While the US, for example, is against banning them, it still wants to ensure that humans remain
03:00ultimately in control of them.
03:02The concept of emergent behaviour, however, implies we might not be able to control them
03:07forever.
03:08In the case of LAWS, it could emerge when, in the near future, they're connected in such a way that
03:12they can easily communicate with each other, independently of human involvement.
03:17Some military minds have pitched the idea for teams of hundreds of LAWS, for instance,
03:22connected in a widespread hive mind.
03:24No humans necessary.
03:27Communication between them would expand, but where that leads is almost impossible to predict.
03:32At the least, we might expect whole new tactics arriving via emergent behaviour, turning weaponised
03:37AI into a force of its own to be reckoned with.
03:40The Pentagon is reportedly making a big push to develop what some have labelled Slaughterbot
03:45Swarms… all of which means that, like it or not, they are likely to become a reality.
03:50Again, we have no idea what a heavily-armed and highly-connected swarm of AIs
03:56will do.
03:57Hopefully, fail-safes will be put in place to prevent serious issues… however, some
04:02worry that any attempted block or limiter will eventually be overridden.
04:06Meanwhile, the US Air Force is also working on something called Project Venom.
04:11This aims to develop powerful F-16 fighters which are capable of flying themselves.
04:16Currently, about $50 million has been invested… and on the bright side, AI jets will certainly
04:21reduce the need to risk human pilots.
04:24If we can rely on AI's loyalty, and if we have the proper fail-safes in place, then
04:29they should only ever be a danger to enemy targets.
04:32However, once again, this is entirely new ground.
04:36Can Venom really be realised exactly as its developers want it?
04:40Won't there always be a risk of it turning against its maker?
04:43Or of it misinterpreting or refusing mission orders?
04:47Or just going on a rampage?
04:49These are the sorts of huge questions that dog any plans to push forward.
04:53In general, as we haven't yet completely solved AI even in a non-military context,
04:59many believe it's just too far, too soon, to try weaponising it.
05:02One positive note is that most major nations agree AI should never be given access to nuclear
05:08weapons.
05:09Although, alarmingly, not everyone is quite in agreement here, either.
05:13But of course, AI isn't only about lethal autonomous weapons.
05:17Yes, they could prove our doom… but what about everything else in the AI bracket?
05:21Currently, AI still isn't truly sentient.
05:25We've likely all used some type of virtual assistant, such as Apple's Siri… but these
05:29are not intelligent enough to overthrow humanity, no matter how spooky they can sometimes seem.
05:35Some predict truly sentient AI in the very near future, though… so should we be worried
05:39about that?
05:40In even a non-military setting?
05:43In 2017, news broke that Facebook had developed two chatbots tasked to converse with each
05:48other over a fictional trade negotiation.
05:51Machine learning was used to create them, and the chat was monitored.
05:55Scarily, the two bots quickly deviated from any predicted script.
06:00They developed their own language, and started conversing in this instead.
06:04It was at this point that Facebook shut the study down.
06:07Other AIs have done similar things, with Google's translation AI having created its own artificial languages
06:13in the past.
06:14Broadly, it's thought that what seems like nonsense to us acts as an intermediary language, an
06:19unreadable link between the machines.
06:22While it's true that neither of these examples is particularly dangerous, given their lack
06:26of power, both cases do highlight how an AI-driven world could head in entirely new and unknown directions.
06:33The Facebook and Google stories might easily be explained away as glitches right now…
06:37but what happens when they're more than one-off peculiarities?
06:41Are these small moments a sign of more significant things to come?
06:45Creative AI, the most readily available form at present, can generate images, videos, and text,
06:51and even solve equations.
06:52And the tech can already do a lot of damage… for example, by replacing certain jobs.
06:58Such as in China, where reports claim that about 70% of video game illustrators have
07:03been axed, partly due to growing reliance on AI.
07:07Gamers have critiqued the AI products, saying they lack human creativity… but there's
07:11little sign of the trend stopping.
07:13Perhaps the most infamous contemporary issue of all, however, is the ongoing appearance
07:17of deepfakes.
07:19These involve using someone's likeness to create a convincing digital copy of that person… which
07:24can rapidly lead to false images and videos of them, made without their consent.
07:29Such AI was one of the triggers for the Hollywood strikes of 2023… but these tools could ultimately
07:34impact far more than simply those in the media.
07:37How far can these new realities take us?
07:40On the one hand, as current generative AIs are usually trained using human data, it's
07:46at least thought and hoped that they can't yet learn to do things that we can't.
07:51Generative results draw on what's been done prior.
07:53In other words, human knowledge will likely be the limiting factor.
07:56And so, the general consensus is that AI in this state can't directly turn against us.
08:03Replace us?
08:04Maybe.
08:05Dislike us?
08:06Possibly.
08:07But break away from us?
08:08Probably not.
08:09There is a darker extension, though.
08:11True AI, also known as Artificial Superintelligence, is a different ballgame.
08:17Right now, it remains in the realm of science fiction.
08:20But were fiction ever to become fact, then it's proposed that this more advanced AI
08:25will learn so well that it will exceed human understanding.
08:29Humans will no longer be the most intelligent lifeforms on Earth, and we'll all be painfully
08:34aware of our demise.
08:35Today, many may consider it speculative territory, but others say that it's vitally important
08:40for us to speculate in order to head it off.
08:44According to Stephen Hawking, for one, there are no physical laws preventing AI from one
08:48day – perhaps one day soon – operating better than the human brain can.
08:53At which point, it could turn against us, just as easily as it could do any number of
08:58other things.
08:59Will supreme AI crave power?
09:02Will it recognise life on Earth as valuable?
09:05Or see it as a threat?
09:06Or merely an annoyance?
09:08Will it one day scrub back through all the media of now – the books and films and YouTube
09:13videos – with admiration, relish, or disdain?
09:17For all the concerns, generative AI should remain beneath us, but is generative only
09:23the first generation?
09:24And will whatever comes next care two hoots about those of us that came before?
09:29What do you think?
09:30Is there anything we missed?
09:31Let us know in the comments.
09:33Check out these other clips from Unveiled, and make sure you subscribe and ring the bell
09:37for our latest content.