Prompt Engineer Michael Taylor joins WIRED to answer your questions from Twitter about artificial intelligence prompts. What is a prompt engineer, and why do companies employ them? What are some general tips to improve the AI prompts we use? Why do AI-generated hands have the wrong number of fingers so often? What is AI "hallucination"? How long will ChatGPT remember the context of your conversations? These questions and plenty more are answered on Prompt Engineering Support.

Category: Tech

Transcript
00:00I'm Prompt Engineer Michael Taylor. This is Prompt Engineering Support.
00:08@marmatzees wants to know, serious question, what is a prompt engineer slash prompt engineering?
00:14One of the main things I'm doing every day as a Prompt Engineer is A-B testing lots of different variations of prompts.
00:20So I might try asking the AI one thing and then ask it a completely different way and see which one works best.
00:26So a Prompt Engineer might be employed by a company in order to optimize the prompts that they're using in their AI applications.
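That A/B testing loop can be sketched in a few lines. Here `call_llm` and `score` are made-up stand-ins for illustration, not a real API or metric:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call; returns a canned response."""
    return f"response to: {prompt}"

def score(response: str) -> float:
    """Stand-in quality metric, e.g. a human rating or an eval suite."""
    return len(response) / 100  # placeholder heuristic

def ab_test(variant_a: str, variant_b: str, trials: int = 10) -> str:
    """Run both prompt variants several times and return the higher scorer."""
    totals = {variant_a: 0.0, variant_b: 0.0}
    for _ in range(trials):
        for variant in (variant_a, variant_b):
            totals[variant] += score(call_llm(variant))
    return max(totals, key=totals.get)

winner = ab_test("Summarize this article.",
                 "Summarize this article in three bullet points.")
```

In practice the scoring step is the hard part: you need a consistent way to judge which response "works best," whether that's click-through rates, human review, or an automated eval.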
00:32@AdamJonesInc is asking, does anyone else use please and thank you when communicating with ChatGPT and Perplexity?
00:39I'm hoping that I'll get better responses, or be treated slightly better if their AI models ever take over.
00:44Specifically saying please and thank you, there is no evidence that that improves the results.
00:48But being emotional in your prompts, for example, using all caps, does actually improve performance.
00:54So if you say, for example, this is very important for my career and you add that to your prompt, it will actually do a more diligent job.
01:02It has learned that from reading Reddit posts and reading social media posts, that when someone says this is very important for my career,
01:09the other people that answer actually do answer more diligently.
01:12One thing that we saw last winter was that ChatGPT started to get a little bit lazy.
01:17And what someone figured out was that when it knows that the date is December, ChatGPT actually does get lazier, because it's learned from us that you should work a little bit less in the holidays.
01:28@Shuffleupagus is asking, do you get better results from LLMs when you prompt it to imagine that you're an experienced astrophysicist?
01:36Why would you want them to pretend? Let's do a little experiment here.
01:39Let's write the prompt as an astrophysicist and then write the same prompt as a five year old and see the difference.
01:45So I've asked it to tell me about quantum mechanics in two lines as an astrophysicist.
01:49And you can see it uses a lot of big words that a typical astrophysicist would know.
01:54We can then ask it the same thing as a five year old.
01:57And now it's explaining quantum mechanics as a magic world where tiny things like atoms can be in two places at once.
02:02The overriding rule is that you should be direct and concise.
02:06"As an astrophysicist" or "you are an astrophysicist" tends to work better than adding unnecessary words like "imagine."
02:13@Vbrandt is looking for any tips on how to improve my prompts.
02:17Well, there are actually thousands of prompt engineering techniques, but there are two that get the biggest results for the least amount of effort.
02:24One is giving direction and the second one is providing examples.
02:28Say, for example, I had this prompt for a product I invented: a pair of shoes that can fit any foot size.
02:34Now, how can I improve that prompt template?
02:36One thing I can do is to give it some direction.
02:38One person who is famous at product naming was Steve Jobs.
02:41You could invoke his name in the prompt template and you're going to get product names in that style.
02:46Alternatively, if you prefer Elon Musk's style of naming companies, you can provide some examples of the types of names that you really like.
02:53The reason there are two hashtags in front of this is that it marks a title.
02:57It really helps ChatGPT get less confused if you put titles on the different sections of your prompt.
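Putting the two techniques together with those `##` section titles, a prompt template might look like this (the direction, example names, and section headings are just illustrative choices):

```python
direction = "Name this product the way Steve Jobs would."
examples = ["iPod", "MacBook Air", "AirPods"]  # names whose style you like
product = "a pair of shoes that can fit any foot size"

# Each "##" heading marks a section title, which helps the model
# keep the instructions, examples, and task separate.
example_lines = "\n".join(f"- {name}" for name in examples)
prompt = f"""## Direction
{direction}

## Examples
{example_lines}

## Task
Give me five product names for {product}."""
```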
03:04@PeteMandic is asking, serious question about AI artists.
03:07Why does the number of fingers on a human hand seem to be particularly difficult for them?
03:11The difficulty in rendering fingers is that hands are very intricate, and the physics is quite difficult to understand.
03:17And these early models were pretty small.
03:19They didn't have that many parameters, so they hadn't really learned how the world works yet.
03:23We also have a really strong eye for whether fingers are wrong or whether eyes are wrong.
03:27It's something that we look out for as humans.
03:29One thing you might try in a prompt is to say, make the fingers look good.
03:33That tends to not work either because everything in a prompt is positively weighted.
03:37If you say, don't put a picture of an elephant in the room, then it will actually introduce a picture of an elephant.
03:42So what you need is a negative prompt.
03:44That's not always available.
03:45For example, it's not currently available in DALL-E, but it is available in Stable Diffusion.
03:50So we're going to type in oil painting hanging in a gallery.
03:53We're going to hit dream.
03:54So what you can see is that some of them have a big gold frame.
03:57But the one on the right doesn't have a frame.
03:59And I actually prefer that.
04:00So how can I get it to remove the frames?
04:03One thing I can do is add "frames" to the negative prompt here.
04:08And the negative prompt is going to remove that.
04:10And now we can see that none of the paintings have frames.
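In code, the same idea shows up as a `negative_prompt` parameter. Here's a sketch using Hugging Face's diffusers library; the model ID is illustrative, and actually generating images requires downloading the weights (and realistically a GPU), so the call is wrapped in a function:

```python
prompt = "oil painting hanging in a gallery"
negative_prompt = "frames"  # concepts to steer the image away from

def generate():
    # Import here so the sketch can be read without diffusers installed.
    from diffusers import StableDiffusionPipeline
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5")  # illustrative model ID
    return pipe(prompt, negative_prompt=negative_prompt).images[0]
```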
04:13@RobertoDigital is asking, what is the weirdest response you've gotten from ChatGPT?
04:18So my favorite one is if you ask it, who is Tom Cruise's mother?
04:22It knows who it is.
04:23It's Mary Lee Pfeiffer.
04:25If you ask it, who is Mary Lee Pfeiffer's famous son?
04:28It doesn't know that Mary Lee Pfeiffer's son is Tom Cruise.
04:31So it will make something up.
04:32I think the last one that I got was John Travolta.
04:35So the reason why this happens is there's lots of information on the Internet about Tom Cruise and who his mother is.
04:41But there's not that much information on the Internet about Mary Lee Pfeiffer and who her son is.
04:45Hallucinating is when the AI makes something up that's wrong.
04:48And it's really hard to get away from hallucination because it's part of why these LLMs work.
04:53When you're asking it to be creative, creativity is really just hallucinating something that doesn't exist yet.
04:58So you want it to be creative, but you just don't want it to be creative with the facts.
05:03@Schwarzschild is asking, I'm not an expert in AI, but if an LLM is trained on biased data, then won't that bias come through in its responses?
05:10Well, you're absolutely correct because AIs are trained on all of the data from the Internet.
05:15And the Internet is full of bias because it comes from us.
05:18And humans are biased too.
05:20But it can be pretty hard to correct for this bias by adding guardrails.
05:23Because by trying to remove bias in one direction, you might be adding bias in another direction.
05:28A famous example was when Google added to their prompts for their AI image generator service an instruction that they should always show diverse people in certain job roles.
05:38What happened was people tried to make images of George Washington, and it would never create a white George Washington.
05:45In trying to do the right thing and solve for one bias, they actually introduced a different bias they weren't expecting.
05:51There is a lot of work in the research labs. Anthropic, for example, has a whole safety research team that has figured out where the racist neuron is in Claude, which is their model.
06:01Where is the neuron that represents hate speech?
06:04Where is the neuron that represents dangerous activities?
06:07And they've been able to dial down those features.
06:10@karkajan wants to know, how much of the conversation context does ChatGPT actually remember?
06:15If we chatted for a year with information-dense messages, would it be able to refer back to info from a year ago?
06:20So when you open a new chat session with ChatGPT, it doesn't know anything about you unless you put something in your settings specifically.
06:27They do have a feature, which is a memory feature.
06:30That is experimental, and I don't think it's on by default.
06:33So one trick that I tend to use is I will get all of the context in one thread for a task.
06:39I'll just ask it to summarize, and then I'll take that summary and then start a new thread.
06:43And then I've got the summary, more condensed information.
06:46It will get less confused by all of the previous history it doesn't need to know about.
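The summarize-then-restart trick, with a chat-style message list; `call_llm` here is a stand-in for whichever chat API you use:

```python
def call_llm(messages: list) -> str:
    """Stand-in for a real chat API; returns a canned summary."""
    return "Summary: we settled on OmniShoe and a minimalist landing page."

# The old thread has grown long, full of detail the model no longer needs.
old_thread = [
    {"role": "user", "content": "Brainstorm product names for the shoe."},
    {"role": "assistant", "content": "UniFit, Adaptix, OmniShoe..."},
    {"role": "user", "content": "Summarize everything we've decided so far."},
]
summary = call_llm(old_thread)

# The new thread starts fresh, carrying over only the condensed summary.
new_thread = [{"role": "user", "content": summary}]
```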
06:50@byBrandonWhite asks, does customizing your settings in ChatGPT and providing your bio slash personal info help get better results?
06:58Yes, I find that you get wildly different results when you put some information in the custom instructions.
07:03You have two fields. The first is custom instructions: what would you like ChatGPT to know about you to provide better responses?
07:09And the second box is: how would you like ChatGPT to respond?
07:13I use ChatGPT a lot for programming, so I tell it what languages I'm using and what frameworks.
07:19I give it some preferences in terms of how I like my code to be written.
07:23The second box is really for anything that you get annoyed about when you're using ChatGPT; you could put that in the box.
07:29So, for example, some people put quit yapping and then it will give you briefer responses.
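If you're calling the API rather than using the ChatGPT website, those two boxes map onto a system message at the top of the conversation. The wording below is just an example:

```python
about_me = (
    "I'm a programmer who works mostly in Python. "
    "I prefer type hints and short, well-commented functions."
)
response_style = "Quit yapping: keep answers brief and lead with the code."

messages = [
    # The system message plays the role of both custom-instruction boxes.
    {"role": "system", "content": f"{about_me}\n{response_style}"},
    {"role": "user", "content": "Write a function that reverses a string."},
]
```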
07:34@Travis.media asks, what makes a prompt engineer an engineer?
07:38A prompt engineer is designing the system of prompts being used in the application and making sure they're safe for deployment,
07:45making sure that they work again and again and again, reliably.
07:48And that's the same sort of thing that a civil engineer is doing with a bridge, right?
07:52Like, they're designing the bridge and making sure that when you drive over it, it's not going to collapse into the river.
07:57@Eigenblade is asking, do you think we can draw parallels between large language models and human brains?
08:03LLMs, or large language models, are loosely modeled on human biology.
08:07They're what happens when you try to make artificial neural networks simulate what the biological neural networks in our brains do.
08:13So there are a lot of similarities and a lot of the things that work in managing humans also work in managing AIs.
08:20So if you've heard of transformer models, which is what the LLMs are all based on,
08:24the breakthrough there was figuring out how to make it pay attention to the right words in a sentence in order to predict the next token or word in the sentence.
08:33So that was the really big breakthrough that was made by Google and then used by OpenAI to create ChatGPT.
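That attention step can be sketched with NumPy: each word's query vector is compared against every word's key vector, and the resulting weights decide how much each word contributes to the prediction. The vectors here are random stand-ins for what a trained model would learn:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    # Softmax each row so the weights for one token sum to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
out = attention(Q, K, V)  # one 8-dimensional output per token
```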
08:39@ThisOneOptimistic wants to know, what are tokens?
08:42So let's say I started writing the sentence, LeBron James went to the.
08:46What word could come next?
08:47Well, an LLM looks at all the words on the internet and then calculates the probability of what the next word might be.
08:53So this is the token Miami, which has a 14% chance of coming next.
08:57And we have Lakers, which has a 13% chance of coming next.
09:01We also have the token "Los," which is just the beginning of "Los Angeles."
09:05Here we have the token Cleveland, which only has a 4% chance of showing up.
09:09But the LLM will sometimes pick this word and that's where it gets its creativity from.
09:14It's not always picking the highest probability word, just a word that's quite likely.
09:18The reason they use tokens instead of words is that it's just more efficient.
09:21A token that's a little part of a word, like "Los," is more flexible and can be trained to be used in different contexts.
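The next-token step can be simulated with the probabilities from the example above (remaining probability mass lumped into an "other" bucket). Sampling from the distribution, rather than always taking the top token, is where the creativity comes from:

```python
import random

# Next-token probabilities after "LeBron James went to the".
candidates = {"Miami": 0.14, "Lakers": 0.13, "Cleveland": 0.04}
candidates["other"] = 1.0 - sum(candidates.values())

def next_token(rng: random.Random) -> str:
    """Sample one token according to the distribution, not just the argmax."""
    tokens, probs = zip(*candidates.items())
    return rng.choices(tokens, weights=probs, k=1)[0]

rng = random.Random(42)
samples = [next_token(rng) for _ in range(1000)]
# Miami and Lakers dominate the named teams, but the low-probability
# Cleveland still shows up sometimes -- that occasional pick is the
# model's creativity.
```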
09:28@EdayemiTesterMo4 asks, what is the best LLM in your opinion?
09:32For me, it's Claude 3 Opus.
09:34I agree. Anthropic, who makes Claude 3, is doing a great job.
09:38I'm going to test this against ChatGPT and then Meta's Llama, which is an open source model, and show you the difference in results.
09:44So the prompt we're using is, give me a list of five product names for a shoe that fits any foot size.
09:50We're testing the model's creativity here.
09:52And you can see that we have UniFit Shoes as one idea, Adaptix Shoes, which is pretty creative, and OneSizeSoles, which is my personal favorite.
10:01I'm just going to copy this prompt to Claude.
10:03And with the same prompt, we get different names.
10:05We have MorphFit, Adapt2Step, OmniShoe. That's my new favorite.
10:10Now we're going to test it on Llama 3, which is Meta's open source model.
10:14And you can see it comes up with really different names.
10:16FitFlex, SizeSavvy, Adjust2Step, UniversalFit.
10:19It comes back with this text at the beginning, and then it describes each name as well.
10:23That's not what I asked it to do.
10:25Personally, it's subjective, but I like the Anthropic Claude response best.
10:30@gmonster7000 is asking, what is a simple task an LLM has done that has changed your life?
10:36For me personally, it's been the programming ability that I get from using ChatGPT and Anthropic's Claude.
10:42Those models are so good at writing code and explaining what that code does that I have really lost my fear of what I can build.
10:49So if we pop over here to Claude, I've made up a fake product, which alerts you if your baby is choking.
10:55I'm trying to build a landing page for it because my developers are busy.
10:59And it's actually going through and just writing that code for me.
11:01Say, for example, I don't understand what this section is doing.
11:05I could just copy that and then paste it at the bottom and say, what does this do?
11:10And it's going to give me bullet points on what that specific code is doing step by step.
11:14And that's the way that you learn with programming.
11:17I find that I just never get stuck when I use this.
11:19One of the coolest things I've done, a little automation or a little life hack that I use every day, is that I set up an email address that I can email with any interesting links I've found that day.
11:29I'll send those to AI, summarize them, and then put them all into a spreadsheet for me to look at later.
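The skeleton of that automation might look like this. `summarize` is a stand-in for fetching each page and asking an LLM for a summary, and the "spreadsheet" is just a CSV file:

```python
import csv

def summarize(url: str) -> str:
    """Stand-in for fetching the page and asking an LLM to summarize it."""
    return f"One-line summary of {url}"

links = ["https://example.com/article-a", "https://example.com/article-b"]

# Write one row per link: the URL and its AI-generated summary.
with open("reading_list.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["url", "summary"])
    for url in links:
        writer.writerow([url, summarize(url)])
```

The email-ingestion half would sit in front of this: a script that polls the inbox and pulls the links out of each message.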
11:34@countergoldsun is asking, what is prompt chaining?
11:37If you wanted to write a blog post, you wouldn't get great results just by asking the AI to write it all in one step.
11:43What I find works is to first ask it to write an outline and do some research; then, when I'm happy with the outline, come back and have it fill in the rest of the article.
11:53You get much better results and they're comprehensive and fit the full brief of the article that you wanted to write.
11:58Not only does it make the thought process more observable, because you can see what happened at each step and which steps failed, but also the LLM gets less confused because you don't have a huge prompt with lots of different conflicting instructions that it has to try and follow.
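A two-step chain looks like this; `call_llm` is a stub that fakes both steps so the flow is visible:

```python
def call_llm(prompt: str) -> str:
    """Stand-in for a real model call, faking both steps of the chain."""
    if prompt.startswith("Write an outline"):
        return "1. Intro\n2. Techniques\n3. Conclusion"
    return "Full article following the outline."

topic = "prompt engineering tips"

# Step 1: ask only for the outline, and review it before continuing.
outline = call_llm(f"Write an outline for a blog post about {topic}.")

# Step 2: feed the approved outline into the drafting prompt.
article = call_llm(
    f"Write the full post about {topic}, following this outline:\n{outline}")
```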
12:12@automationace is asking, how can you automate AI?
12:15That's what's called an autonomous agent, where it's running in a loop and it keeps prompting itself and correcting its work until it finally achieves the high level goal.
12:25Microsoft AutoGen is an open source framework for autonomous agents that anyone can try if you know how to code.
12:31And I think that's really the big difference between ChatGPT, this helpful assistant that we're using day to day, versus having an AI employee in your Slack that you can just tell, make me more money for the company, and it will go and try different things until something works.
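A stripped-down version of that loop, with a stubbed model and a stubbed goal check standing in for what a framework like AutoGen orchestrates:

```python
def call_llm(prompt: str) -> str:
    """Stand-in model: 'improves' on each previous attempt it can see."""
    return prompt.count("Attempt:") * "very " + "good plan"

def goal_met(result: str) -> bool:
    """Stand-in for checking the high-level goal, e.g. revenue went up."""
    return result.startswith("very very")

goal = "make me more money for the company"
attempts = []
while len(attempts) < 10:  # safety cap so the loop always terminates
    # Each pass feeds the goal plus all previous attempts back in.
    prompt = f"Goal: {goal}\n" + "".join(f"Attempt: {a}\n" for a in attempts)
    result = call_llm(prompt)
    attempts.append(result)
    if goal_met(result):
        break
```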
12:47@misterdrozdov is asking, how would you prompt the LLM to improve the prompt?
12:52There's actually been a lot of really interesting research here.
13:00Techniques like the Automatic Prompt Engineer, where the LLM writes prompts for other LLMs.
13:00And this works really well.
13:01I actually use it all of the time to optimize my prompts.
13:04Just because you're a prompt engineer doesn't necessarily mean you're immune from your work being automated as well.
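The core of that automatic-prompt-engineer loop: one LLM proposes candidate prompts, each candidate is scored on the task, and the best one wins. Both the proposer and the scorer are stubs here; in the real technique each would be an LLM call plus an evaluation set:

```python
def propose_prompts(task: str, n: int = 3) -> list:
    """Stand-in for an LLM generating candidate prompts for a task."""
    return [f"Variant {i}: {task}, step by step." for i in range(n)]

def score(prompt: str) -> float:
    """Stand-in eval, e.g. downstream accuracy on a held-out test set."""
    return -len(prompt)  # placeholder: pretend shorter prompts score higher

candidates = propose_prompts("summarize legal contracts")
best = max(candidates, key=score)
```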
13:09@bahuprompts is asking, how long until prompt engineering, or a future field related to this, becomes a degree?
13:16Will it be a standalone field or part of every field being taught?
13:19That's a really great question, because some people I talk to say that prompt engineering isn't going to be a job in five years.
13:26It's not even going to be something we practice, because these models are going to be so good, we won't need to prompt them.
13:30I tend to disagree because, of course, humans are already pretty intelligent, and we still need prompting.
13:35We have a HR team, we have a legal team, we have management.
13:38So I think that the practice of prompt engineering will always be a skill that you need to do your job.
13:43But I don't necessarily think that in five years we'll be calling ourselves prompt engineers.
13:47So those are all the questions for today. Thanks for watching Prompt Engineering Support.