Is AI ready for use in the sciences? And if not, how can we get there? Stephen Wolfram, founder and CEO of Wolfram Research, spoke at Imagination In Action's 'Forging the Future of Business with AI' Summit about what LLMs do and don't do well, and how AI can be used productively in science.

Subscribe to FORBES: https://www.youtube.com/user/Forbes?sub_confirmation=1

Fuel your success with Forbes. Gain unlimited access to premium journalism, including breaking news, groundbreaking in-depth reported stories, daily digests and more. Plus, members get a front-row seat at members-only events with leading thinkers and doers, access to premium video that can help you get ahead, an ad-light experience, early access to select products including NFT drops and more:

https://account.forbes.com/membership/?utm_source=youtube&utm_medium=display&utm_campaign=growth_non-sub_paid_subscribe_ytdescript

Stay Connected
Forbes newsletters: https://newsletters.editorial.forbes.com
Forbes on Facebook: http://fb.com/forbes
Forbes Video on Twitter: http://www.twitter.com/forbes
Forbes Video on Instagram: http://instagram.com/forbes
More From Forbes: http://forbes.com

Forbes covers the intersection of entrepreneurship, wealth, technology, business and lifestyle with a focus on people and success.
Transcript
00:00 So, what's it going to take, for example, for AIs to beat humans in doing science?
00:07 You know, I was interested in this recently. What is science about?
00:12 What is one trying to do in science?
00:13 A typical thing one's trying to do in science is predict what will happen in systems, for
00:18 example in nature.
00:19 And there's a question of we have ways that we've tried to do that, can AI just look at
00:25 what's been happening in some system and immediately predict what's going to happen next?
00:30 Unfortunately, the answer is kind of no.
00:33 And the issue is there are plenty of systems where there's just a sort of irreducible amount
00:38 of computation that that system is doing and AIs as they're currently built are just doing
00:45 essentially fairly shallow computation.
00:47 Even if you say to an AI, you know, I've got some sine wave and I'm
00:52 going to give you the first part of the sine wave, predict what comes next.
00:57 A typical, fancy, modern machine learning AI will fail.
01:03 In fact, it will reproduce the part of the sine wave that you showed
01:07 it already, the part you trained it on, and then
01:11 its extrapolation going forward will just be based on what the activation functions
01:16 were inside the neural net.
01:17 If it has linear activation functions, it'll say the sine wave must continue
01:22 linearly, so to speak.
01:23 It does surprisingly badly.
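A minimal sketch of that experiment (assuming numpy and scikit-learn; any small feed-forward regressor shows the same behavior):

```python
# Train a small feed-forward net on one stretch of a sine wave,
# then ask it to extrapolate past what it saw.
import numpy as np
from sklearn.neural_network import MLPRegressor

x_train = np.linspace(0, 4 * np.pi, 500).reshape(-1, 1)
y_train = np.sin(x_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(64, 64), activation="relu",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(x_train, y_train)

x_test = np.linspace(4 * np.pi, 8 * np.pi, 500).reshape(-1, 1)

# In-sample the fit is close; out-of-sample the ReLU net extrapolates
# piecewise-linearly (straight lines, per its activation functions)
# instead of continuing to oscillate.
print("in-sample  max error:", np.abs(net.predict(x_train) - y_train).max())
print("out-sample max error:", np.abs(net.predict(x_test) - np.sin(x_test).ravel()).max())
```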
01:25 I think the thing to understand is that the places where modern LLMs and AIs
01:31 and so on are doing well are places where, in a sense, the computation that has
01:36 to be done is shallower than we knew.
01:39 So for example, in language, I think the big thing we learned from the success of ChatGPT
01:44 is that in a sense language is simpler than we expected.
01:47 There's more regularity in language than we ever identified.
01:50 And that's why the AI is able to be successful there.
01:53 In these problems in science where there is something fundamentally sort of computationally
01:57 irreducible going on, the AI doesn't really have more of a chance than we humans do.
02:04 And you know, it's not able to make more progress.
02:07 In a sense, there are things where,
02:11 if you say, give me some vague impression of what the literature of this subject has
02:17 to say about this or that thing,
02:19 that's a place where LLMs can do quite well.
02:21 I mean, in the past we've had kind of statistics which can say, you know, given this whole
02:25 big pile of numbers, tell me what the outliers in this big pile of numbers are.
02:29 Tell me what the mean of these numbers is, the variance of these numbers.
02:33 So now with LLMs we can do the same kind of thing for large amounts of text.
02:38 We've never really had that capability before. It isn't statistics;
02:42 it's something new that doesn't really have a name yet.
02:44 It's kind of a way of analyzing large amounts of text.
02:48 And that's something where I think we can expect that AIs will be useful.
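A rough sketch of the analogy. The numeric half is ordinary statistics; the text half assumes the OpenAI Python client, with a placeholder model name and OPENAI_API_KEY set in the environment:

```python
import numpy as np
from openai import OpenAI

numbers = np.random.default_rng(0).normal(size=1000)
print("mean:", numbers.mean())
print("variance:", numbers.var())
print("outliers:", numbers[np.abs(numbers) > 3 * numbers.std()])

# The text analogue: no closed-form formula, just a model-mediated digest.
client = OpenAI()
documents = ["...abstract 1...", "...abstract 2...", "...abstract 3..."]
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[{"role": "user", "content":
               "Give a rough impression of what these texts collectively say, "
               "and flag any outliers:\n\n" + "\n---\n".join(documents)}])
print(resp.choices[0].message.content)
```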
02:52 I mean, there's much more to say about this whole question.
02:56 People say, for example, oh, is the AI able to do something original and creative?
03:02 It's actually very trivial to do something original and creative: you just pick a bunch
03:06 of random numbers.
03:08 That sequence of random numbers is very, you know, unexpected, creative, original, whatever
03:13 else.
03:14 The problem is that nobody really cares about the typical
03:17 sequence of random numbers we might pick.
03:19 What we're interested in is the original stuff that somehow we think is interesting.
03:25 So for example, if you imagine, you know, you're making pieces of art, a random array
03:29 of pixel values is certainly original, just not terribly interesting.
03:34 And then the question is, if you look at some generative AI
03:38 system and you explore different possible values of the embedding
03:45 vectors in its latent space: for any picture
03:50 you tell it to make, a picture of a cat in a party hat or something, the meaning of
03:55 'cat in a party hat' is converted into an array of numbers.
04:01 That array of numbers is the thing from which the picture of the cat is generated.
04:05 If you just change those numbers and you say what's out there in that space of possible
04:10 numbers that represent meaning of things, you find all kinds of pictures, most of which
04:15 are not interpretable by us.
04:17 Most of them are kind of interesting looking, but we say I don't know what that is.
04:22 You know, maybe in the future somebody will say there's a whole style of art that's based
04:26 on this particular direction in embedding space, and everybody
04:30 gets excited about that, and then it has a name, we talk about it, it becomes a thing,
04:35 we build on that and so on.
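A hedged sketch of that kind of latent-space wandering, using Hugging Face's diffusers library. The model id and latent shape are assumptions for Stable Diffusion v1.5; perturbing the initial latents, rather than the text embedding itself, is a simplification of the idea described above. It needs a GPU and the model weights:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to("cuda")

prompt = "a cat in a party hat"   # the meaning becomes an array of numbers
gen = torch.Generator("cuda").manual_seed(0)
base = torch.randn((1, 4, 64, 64), generator=gen,
                   device="cuda", dtype=torch.float16)

# Step away from the starting point in latent space; most of what is out
# there is "interesting looking" but has no name yet.
for i, eps in enumerate([0.0, 0.4, 0.8]):
    latents = base + eps * torch.randn_like(base)
    image = pipe(prompt, latents=latents).images[0]
    image.save(f"latent_walk_{i}.png")
```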
04:37 So, you know, when people say, well, the AI is going to find
04:42 something new that was unexpected: it's rather easy to do that.
04:48 People have tended not to do that.
04:49 I mean, you know, when we write programs, for example, it's like we imagine what the
04:54 program is going to do, we construct a program step by step to do what we want the program
04:57 to do.
04:58 There's a question, if you just wrote programs at random, what would they typically do?
05:04 I happen to have spent lots of time studying that in science.
05:06 The answer is even incredibly simple programs, tiny programs, do really complicated things.
05:13 Sometimes those things are really interesting and when we look at them we say that's useful,
05:17 I can use that program for such and such a thing.
05:19 I mean, it's kind of like in the natural world, we can kind of go and mine the natural world
05:23 and we find, you know, magnets and liquid crystals, things like that, and we realize
05:27 those things are useful for something we want to do technologically.
05:30 It's the same kind of thing in this kind of computational world.
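In the spirit of that remark, a minimal sketch of "writing programs at random": pick one of the 256 elementary cellular automaton rules at random and print what it does. Even these tiny programs can produce remarkably complicated behavior:

```python
import numpy as np

rng = np.random.default_rng()
rule = int(rng.integers(0, 256))             # a random "tiny program"
table = [(rule >> i) & 1 for i in range(8)]  # output for each 3-cell pattern

width, steps = 79, 40
row = np.zeros(width, dtype=int)
row[width // 2] = 1                          # single black cell to start

print(f"rule {rule}:")
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    left, right = np.roll(row, 1), np.roll(row, -1)
    row = np.array([table[4 * l + 2 * c + r]
                    for l, c, r in zip(left, row, right)])
```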
05:33 So what you described for science is that in some sense it's like data compression.
05:37 You observe all the dynamics of the universe and you find pockets that you can replace
05:42 by a few laws, by a few differential equations, and then you understand what's going on or
05:46 the parts that you care about.
05:48 But there is another aspect that goes beyond data compression, both in art and science,
05:52 and it's basically this journey that it changes you.
05:54 When you are creative, the creator gets changed.
05:57 The point of art is that you perceive something and it changes you.
06:00 It's also true in business.
06:01 When you're creative in business, the point is that it changes your business if you are
06:04 creative.
06:05 It doesn't just matter that you do something new or come up with new ideas.
06:09 What is important is that the observer or the creator themselves is changed
06:14 by this process of creation.
06:16 And maybe that's where we see a limitation in some of the present models:
06:21 while they are producing, they are not learning.
06:23 They're not updating based on what they discover.
06:26 It's you who has to do this.
06:27 Right.
06:28 And the thing to realize is that there's a large space of conceivable
06:33 ideas.
06:34 We humans have, you know, maybe 50,000 words in typical languages.
06:38 We have picked certain particular concepts that we, at the present time in history, think
06:44 are important enough to give words to.
06:46 And that raises this question of how we explore, how we extend ourselves
06:53 and go out and find other things that we decide are interesting, give words to, and develop
06:59 from.
07:00 I mean, in a sense, one of the issues is that the world of what
07:03 is computationally possible is very big.
07:06 And an AI left to its own devices could be expected to just go out and explore
07:10 this world of what's computationally possible.
07:13 The set of what's computationally possible that we care about is really tiny, actually.
07:17 But it's the thing that is important to us.
07:20 And in kind of the future of our civilization or whatever, I think the story really can
07:24 be thought of as how do we explore this kind of computational space of possibilities?
07:29 You know, when people say, what do the humans do versus the AIs?
07:34 One of the things the humans do is pick which directions they find interesting to go in
07:39 in this kind of computational universe of possibilities.
07:41 Well, you were raising the question of the role of
07:56 things like science.
07:57 And I think one of the points is in the sort of computational universe of possibilities,
08:01 there are certain pieces for which we can come up with human narratives, so to speak.
08:06 Much of what goes on is not understandable.
08:09 We don't have a way to kind of reduce it to something which we can describe.
08:13 Yeah.
08:14 But you just build tools for this, right?
08:15 You spend most of your life developing tools to explore the space of possibilities with
08:21 computational methods.
08:22 Yes.
08:23 Right?
08:24 And so from this perspective, can we use those tools, for instance, to understand the present
08:27 AI systems that we are building and how to apply them?
08:29 Yeah.
08:30 So, I mean, what I do for a living is build this computational language we call Wolfram
08:35 Language these days.
08:37 And kind of the idea is to have a way to computationally formalize things in the world.
08:43 So I think, you know, a big picture of history, I suppose, is this:
08:48 originally, a couple hundred thousand years ago, probably before
08:53 language was invented, people were just sort of pointing at different rocks and didn't
08:57 have a general description of the concept of a rock.
09:01 Then human language got invented, and we now have this kind of formalized way of saying
09:05 that's a rock as opposed to a chicken or something else.
09:09 And that's one stage of formalizing things.
09:12 Then we got things like logic, which is a way of kind of formalizing arguments.
09:16 Then we got mathematics, which is another kind of way of formalizing the world.
09:21 And in the last hundred years, basically, we've had the idea of computation, the idea
09:26 of formalizing things by kind of specifying rules for how things work and seeing
09:31 what the consequences of those rules are.
09:34 What I've been interested in doing is taking kind of what exists in the world,
09:38 whether it's cities or chemicals or movies or images or whatever else, and representing
09:43 all those things computationally and building a language that can describe what one wants
09:48 to do with those things computationally.
09:50 So it's kind of like in mathematics, about 500 years ago, people kind of invented mathematical
09:55 notation, plus signs and equal signs and things like that.
09:57 And that was kind of important, because that's what led to the development of algebra and
10:01 calculus and basically modern mathematical science and engineering and so on.
10:06 And so my effort in the last 40-something years has been to try and make kind of a computational
10:12 notation for representing things computationally that is a sort of streamlined way to represent
10:19 the world computationally, so that for any sort of field X, one is providing kind of
10:24 the tools, the notation, to build a kind of computational X.
10:29 And, you know, it turns out to be fairly important.
10:33 It's allowed people to discover and invent lots of kinds of things.
10:36 There are lots of businesses now that kind of run on our computational
10:41 language, where the idea is to describe what the business is doing in computational
10:46 terms.
10:47 The goal is to have the sort of lumps of computational work that are described
10:51 by the language be as high level as possible.
10:54 You know, when people say it's really remarkable, "ChatGPT can write all this code for me,"
10:58 in a sense what that's saying is that most of that code was boilerplate.
11:03 Most of that code was something where, you know, for a language designer like me, if
11:07 ChatGPT can trot out the boilerplate correctly, that thing should have just been a function
11:12 in the language.
11:13 It shouldn't have been a thing that you have to explicitly write out every time.
11:17 And my goal has been to identify, you know, what are the lumps of computational
11:23 work that we humans care about that we can package together. It turns out
11:27 the language nowadays has about 7,000 built-in primitive functions, which
11:33 probably each do a little bit more work than the typical words in a natural
11:38 language.
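A small Python illustration of the point: the hand-rolled loop below is the kind of boilerplate an LLM can reliably trot out, while the one-liner packages the same lump of computational work as a single built-in function:

```python
import numpy as np

def moving_average_boilerplate(xs, w):
    out = []
    for i in range(len(xs) - w + 1):        # the part that gets retyped
        out.append(sum(xs[i:i + w]) / w)    # in every codebase, forever
    return out

def moving_average(xs, w):
    # Same work, one function call.
    return np.convolve(xs, np.ones(w) / w, mode="valid")

data = [1.0, 2.0, 3.0, 4.0, 5.0]
print(moving_average_boilerplate(data, 3))  # [2.0, 3.0, 4.0]
print(moving_average(data, 3))              # [2. 3. 4.]
```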
11:39 And the thing that's been interesting now is, you know, we used to primarily have human
11:44 users of our products.
11:46 Now we also have lots of AI users of our products because, you know, an LLM is a pretty
11:52 good linguistic interface.
11:53 It's pretty good at going from kind of our vague thinking and conversation and so on.
12:01 But it, like us humans, needs to use tools to actually compute things.
12:06 Just like if you ask a human, you know, take this kind of computational rule and see what
12:10 its consequences are, a human will not be able to do that.
12:13 Would it be helpful for an LLM if it could use Mathematica or Wolfram Language?
12:17 Sure.
12:18 I mean, that's been a thing.
12:19 I mean, you know, we made a thing with OpenAI, what was it, more than a year ago now, just
12:24 a way of having ChatGPT call Wolfram Language or call Wolfram Alpha.
12:31 Our Wolfram Alpha system we built 15 years ago now is a system that takes natural language
12:37 and converts kind of small fragments of natural language into computational language and then
12:42 computes things.
12:43 And one of the features of Wolfram Alpha is that when it thinks it understands, it really has
12:48 nailed it.
12:49 Sometimes it will say it doesn't understand, but when it has successfully
12:54 converted the natural language to computational language, the result will be correct and
13:00 meaningful.
13:01 One thing that's great about it is that it doesn't hallucinate.
13:03 And when you're using it, you're also not hallucinating:
13:05 it helps you to hallucinate less, because you can now test your ideas.
13:08 And that's a big issue with the LLMs because they're trained on human-generated text and
13:13 reproduce something that's very similar to this.
13:16 And a lot of that text is basically a hallucination of a possible reality, but you don't know
13:20 whether it's the one that you're in or whether it's logically sound.
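For example, one can reach Wolfram|Alpha programmatically; a minimal sketch using its Short Answers API. The endpoint is real, but "YOUR_APPID" is a placeholder for a key from developer.wolframalpha.com:

```python
import requests

def wolfram_short_answer(query, appid="YOUR_APPID"):
    r = requests.get("https://api.wolframalpha.com/v1/result",
                     params={"appid": appid, "i": query})
    # Failure to understand comes back as an explicit error status,
    # not a confident-sounding guess.
    return r.text if r.status_code == 200 else None

print(wolfram_short_answer("integrate x^2 sin(x)"))
```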
13:23 So is it possible for somebody who's currently exploring how to use LLMs to augment their
13:30 work with Wolfram Language?
13:32 And how would they get to start doing this?
13:36 What would be the starting point?
13:37 So I mean, you know, the real thing we've tried to do is to make a computational language
13:42 that allows people to fluently think computationally.
13:47 You know, the thing to understand is that most software engineering consists of doing what
13:52 one can think of as kind of very upscale manual-labor-type work: writing lines of code.
13:59 Typically, you spend two weeks writing lines of code and then you spend
14:04 a short amount of time saying, "How am I going to do the next thing?
14:07 How am I going to kind of think about what to do in computational terms?"
14:10 The thing that we've done for the last, I don't know, 37 years now with Wolfram Language
14:15 is provide something which is sort of as automated as possible.
14:19 So you're kind of concentrating the effort into thinking about what you're trying to
14:23 do computationally, and it's our job to kind of automate as much of the actual doing of
14:28 that as possible.
14:29 So it's a different kind of mode of work.
14:31 It's something, you know, our products have been used particularly by scientists and researchers
14:36 and so on who are people who know what they want to do and just want the tools to be able
14:41 to do that as efficiently as possible.
14:43 It's a slightly different calculation from typical sort of software engineering where
14:48 a lot of the work is the actual, you know, building of lines of code.
14:52 So you really end up having to concentrate on thinking about how you think
14:56 about what you want to do computationally.
14:58 And I think that's a skill people should learn.
15:01 It's not really computer science.
15:03 It's different from computer science, but that's an important skill.
15:07 It's something the LLMs, the AIs, can help with, because they are a useful interface
15:12 for our kind of vague way of thinking about things. You know, a typical use
15:19 case is you talk to the LLM, it tries to write Wolfram Language code, you look at that code,
15:25 and the whole point of that code is it's a notation for computation that's intended for
15:30 humans to read as well as to write.
15:33 So you look at what the LLM produced.
15:35 You say, is that actually what I meant?
15:37 You know, this is a precise representation of this.
15:40 Is it what I meant?
15:41 If the answer is yes, you say, okay, great.
15:44 You can then go use that as a kind of solid brick that you can then build a whole system
15:48 on top of.
15:49 The typical workflow that is sort of
15:55 emerging is that you use the LLM as a way to kind of get an idea of how you should represent
16:02 what you're talking about computationally.
16:05 You then get this kind of solid brick of computation that you can then build with from there.
16:11 And, you know, the LLM will often produce a piece of code.
16:14 It will run it.
16:16 We have all kinds of telemetry that the code generates as it runs.
16:19 The LLM will look at that telemetry, which will be quite boring to a human, but it will
16:23 say, oh, whoops, it didn't do what I expected, and the LLM will go and say, let me change
16:27 the code and so on.
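A skeletal sketch of that generate-run-inspect loop. Here ask_llm is a hypothetical stand-in for whatever model call you use; the shape of the loop (run the code, feed the failure back, retry) is the point:

```python
import traceback

def ask_llm(prompt: str) -> str:
    # Hypothetical: call your LLM of choice and return code as text.
    raise NotImplementedError

def generate_and_run(task: str, max_attempts: int = 3):
    prompt = f"Write Python code for: {task}"
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        try:
            namespace = {}
            exec(code, namespace)   # run the generated code
            return namespace        # success: hand back what it computed
        except Exception:
            # Feed the "telemetry" (here just a traceback) back to the
            # model and ask it to revise.
            prompt = (f"This code:\n{code}\n"
                      f"failed with:\n{traceback.format_exc()}\nFix it.")
    return None
```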
16:29 It's a funny situation, because I view the LLM as a little bit like a wild animal.
16:33 I mean, what we've built is more like a machine.
16:36 The LLM is more like a wild animal.
16:37 And the question is, can you train it, can
16:41 you domesticate it in the right way to have it do something really useful to you, and
16:45 when it snarls at you, have that not be a huge problem?
16:48 So can we?
16:49 Can we train the LLM to become a computer algebra system, for instance?
16:54 Can we get it to make proofs?
16:55 Or do you think that this family of systems is in principle incapable, and we need to
17:00 find something new?
17:01 Well, the current models, you know, current LLMs are pretty
17:06 much feed-forward networks.
17:07 I mean, you know, you've got a bunch of text, and the goal is, what's the next
17:12 token supposed to be?
17:13 And you kind of ripple through the neural net, and it says, well, these are the probabilities
17:16 for the next token.
17:17 You put down the next token, then it goes and takes that whole sequence, and it loops
17:22 around.
17:23 So the only kind of feedback mechanism is this outer loop, and in that outer
17:28 loop, you can do all kinds of good things.
17:29 Like you can have that outer loop notice that the LLM now
17:34 wants to call a tool, and it goes off and generates something which is actually input
17:38 for our computational language, for example, and goes to use that.
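A toy sketch of that architecture. The functions model, sample, and run_tool are stubs, not any real API; the point is that all the feedback lives in the outer loop, which can also notice and dispatch tool calls:

```python
def model(tokens):
    # One feed-forward pass: tokens in, next-token probabilities out.
    # (Stubbed so the sketch runs; a real model is a neural net.)
    if "4" in tokens:
        return {"<eos>": 1.0}
    return {"<tool>2+2</tool>": 1.0}

def sample(probs):
    return max(probs, key=probs.get)    # greedy decoding, for simplicity

def run_tool(call):
    return "4"                          # stand-in for computational language

tokens = ["What", "is", "2+2", "?"]
while True:
    nxt = sample(model(tokens))         # the only per-step "computation"
    if nxt == "<eos>":
        break
    if nxt.startswith("<tool>"):        # the outer loop notices a tool call,
        tokens.append(run_tool(nxt))    # runs it, and feeds the result back
    else:
        tokens.append(nxt)
print(tokens)                           # ['What', 'is', '2+2', '?', '4']
```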
17:43 But it's kind of a weak form of computation, so to speak.
17:48 I've done a bunch of experiments trying to guide doing proofs
17:53 using LLMs.
17:54 They were a total failure.
17:57 Maybe other people can make it work.
17:58 I am skeptical about that.
18:00 I mean, just to understand, when you do something like a mathematical proof, there are many
18:04 different kinds of problems that are similar.
18:06 Mathematical proofs, chemical synthesis: pathfinding problems generally, where you have some system, and
18:11 you say, "There are these steps you can take.
18:13 How would I put these steps together to get to a particular objective?"
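A minimal concrete version of that framing: states, a set of allowed steps, and a breadth-first search for a sequence of steps reaching a goal. The toy rules here are hypothetical stand-ins for proof steps or synthesis reactions:

```python
from collections import deque

def find_path(start, goal, steps, max_depth=20):
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        if len(path) >= max_depth:
            continue
        for name, step in steps:
            nxt = step(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

steps = [("add3", lambda n: n + 3), ("double", lambda n: 2 * n)]
print(find_path(1, 29, steps))
# e.g. ['add3', 'add3', 'add3', 'add3', 'double', 'add3']
```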
18:17 And that just doesn't seem like the kind of thing LLMs can do.
18:23 There are probably some exceptions. Okay, doing integrals, for example, in mathematics:
18:29 the details of how to do that, I don't think LLMs are very helpful with.
18:34 Things like, "Tell me roughly what kind of a function is going to show up in this integral."
18:38 That's a very human kind of thing, and that's something where LLMs can be somewhat useful,
18:42 I think.
18:43 LLMs are very good at making homework, right?
18:45 And the better they get, the more advanced the degree of the homework becomes that they
18:50 are able to emulate.
18:52 But of course, a lot of this homework is in the training data, and a lot of it is designed
18:56 to be solvable.
18:57 So, at the edge of human knowledge, it's much, much harder to deploy them, right?
19:01 Well, I think, as a matter of fact, for homework, you know, for doing math, the LLM left to
19:06 its own devices will just get the math wrong.
19:08 Yeah.
19:09 Period.
19:10 End of story.
19:11 It's just not built for that.
19:12 But they've gotten a lot better at this.
19:13 I don't think that's quite true anymore.
19:14 So, for instance, if you ask ChatGPT to solve a math problem, it will usually write a small
19:18 Python program and then execute it, and that makes it better.
19:21 The one-shot solution, of course, just piping something through 200 layers of an LLM, is not giving
19:26 you the solution, because it's impossible to get this function in there to make an entire
19:30 proof, right?
19:31 But you can get this thing to get there step by step.
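The sort of small program meant here might look like this (sympy is a real library; the specific equation is just an illustration):

```python
import sympy as sp

x = sp.symbols("x")
print(sp.solve(sp.Eq(x**2 - 5 * x + 6, 0), x))   # [2, 3]
```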
19:35 And of course, OpenAI is working very much on this long-form reasoning; it's an explicit
19:39 goal.
19:40 But what I always found a little bit curious about your project over all those years is
19:45 that you always conceptualized Wolfram Alpha as an extension of the human mind.
19:48 It's a tool that allows us to do certain things that our mind is not really good at, like
19:53 also making extremely precise long-form proof, keeping lots of things integrated, and so
19:57 on.
19:58 Whereas OpenAI is focused on building something that is maybe surpassing the human capability
20:03 in all dimensions, including making proofs.
20:08 Do you think that this project of OpenAI's is futile?
20:11 That we cannot build something that's a general intelligence that replaces humans, and
20:15 we will always end up with systems that extend us?
20:18 Or is it more that we should be building systems that extend us, because
20:21 that's actually much more useful to us?
20:23 Well, I mean, there is a lot that can be done computationally, so to speak.
20:29 As I was saying, the problem is most of it is not aligned with what we care about.
20:33 I mean, you know, just yesterday I was working on a project.
20:37 I've been interested recently in a sort of fundamental project about why machine learning
20:41 works.
20:42 It's not clear why machine learning works.
20:44 It's not clear why it should be possible to get these systems to be trainable.
20:49 It's not clear why they should be able to extrapolate the ways they can extrapolate,
20:53 and so on.
20:54 So, you know, I was looking at a bunch of simple programs and what they do, and these
20:59 things are things that I don't understand very well.
21:02 Immediately, it's very easy to do things which are far beyond what human minds can
21:07 grasp.
21:08 That's easy.
21:08 The question is, you know, how do you relate that to what human minds actually want
21:14 to do?
21:15 Well, simple.
21:16 You prompt the LLM with, "What would Stephen Wolfram want?"
21:20 You know, I'm in the bad situation of having done enough live streaming and other kinds
21:26 of things that there's like 50 million words of me talking about stuff that's out there
21:31 on the web and so on.
21:33 And so, you know, somebody's even trained a little bot of me, which I didn't find that
21:38 exciting for me.
21:39 But, you know, this whole question about
21:46 what a human wants: we've been working quite a bit on AI tutoring
21:52 type technology, which is something where everybody who sees an LLM says, "Oh, we're
21:56 going to be able to make a tutoring system out of this.
21:58 It's going to be really easy."
22:00 The number one observation is it's not easy.
22:03 The obvious things that you do, they work for a five minute demo and then they fail
22:07 horribly.
22:08 Yeah.
22:09 So, you know, to make it actually work is non-trivial.
22:13 And the main thing we're discovering is that it's sort of a mixture of software engineering,
22:18 sort of symbolic computation and so on, together with some cleverness about how to cage the
22:23 wild animal appropriately.
22:25 That's what seems to be making progress.
22:27 But what's really happening there is you want the LLM to have kind of a model for the human
22:30 student.
22:31 You know, ultimately what I would like an LLM to be able to do is to know everything
22:35 that I know.
22:36 And if I'm trying to learn about some new thing, it can know immediately, this is the
22:40 one fact that I should tell you, because that will unlock this whole path of understanding;
22:45 it can decode what's needed for that based on knowing what I already
22:51 know.
22:52 Yeah.
22:53 So, I think there's a question of what it means to make a beyond-human
22:56 intelligence.
22:58 What would such a thing be able to do?
23:02 And to what extent is that possible? I mean, in the early years of AI, people said,
23:07 oh, as soon as you have a computer that does mathematical computation, you'll
23:12 have a computer that's kind of doing AI.
23:15 Well, you know, we have that, but it's something that is a very non-human kind of intelligence.
23:23 All right.
23:24 Thanks.
23:25 All right.
23:26 Well done.
23:27 Thank you, Yosef.
23:27 Thank you.
