“Promptography” is what Berlin-based artist Boris Eldagsen calls his creative process. He inputs highly specific and deliberate text prompts into generative AI programs like DALL-E or Midjourney, and tweaks their outputs repeatedly to create thought-provoking photographs… or at least, what look like photographs. Senior video producer Becca Farsace flies to Berlin to investigate exactly how Boris's process works, how he's fooled award shows, and what her final thoughts are on this new age of generative AI art.

Transcript
00:00 Earlier this year, this photo by Berlin artist Boris Eldagsen won a Sony World Photography
00:05 Award.
00:06 Sony's press release called it "haunting" and "reminiscent of the visual language
00:09 of 1940s family portraits."
00:11 But Boris rejected the award because this photo was AI-generated.
00:16 As a person who makes a living as a photographer and videographer, on principle I want to hate
00:20 all of this work and all of the tools that make it possible.
00:23 But in reality, I haven't been able to look away.
00:26 So I went to Berlin.
00:27 Mom, I made it!
00:28 To see how it's done.
00:30 I'm Boris Eldagsen.
00:31 I'm a Berlin-based visual artist working with photography, video installation, and
00:38 for one and a half years now working with artificial intelligence.
00:42 These are pieces from Boris' series "Pseudomnesia No. 3," meaning "fake memories."
00:47 It fuses the visual language of the 1940s and post-war photography with abstract art.
00:52 It was entirely created with text-based AI image generators.
00:55 My text prompts are quite complex.
00:57 I can go up to 13 text-prompt elements, and you see they're kind of poetic.
01:03 He considers himself a "promptographer," or someone who uses text-to-image generators
01:08 to create the visual imagery that he wants.
01:11 And his process takes a great deal of time.
01:13 What you're about to see is a graphical representation of that process.
01:16 I still like to start with text-to-image.
01:20 The result is just an interim product.
01:22 I use the result and I blend it in Midjourney.
01:27 That again is just an interim product.
01:30 I use those images, combine them with text, so it becomes an image prompt, create something
01:37 new out of it.
01:38 And this I do multiple times.
01:41 And in the end, I spend one or two days in post-production doing what in the past was
01:47 called in-painting, out-painting, and what Photoshop calls these days generative fill
01:52 or expand.
01:54 And then it can take two months to have 15 images produced.
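To make that loop a little more concrete, here's a rough sketch of what the prompts can look like at each pass. Midjourney isn't scriptable through a public API, so these are just Discord-style prompt strings held in Python; the URL is a placeholder and the wording is invented for illustration, not Boris's actual prompts.

```python
# A sketch of the iterative prompting loop Boris describes, written as
# Midjourney-style /imagine prompts (URL and wording are placeholders).

# Pass 1: plain text-to-image. The result is only an interim product.
pass_1 = (
    "/imagine prompt: two figures embracing, 1940s family portrait, "
    "sepia, heavy film grain --seed 1234"
)

# Pass 2: the interim image comes back in as an image prompt (its URL goes
# at the front of the prompt), combined with new text to push it further.
pass_2 = (
    "/imagine prompt: https://example.com/interim-result.png "
    "double exposure, abstract light leaks, post-war mood --seed 1234"
)

# ...repeat several times, then finish with a day or two of inpainting /
# outpainting (what Photoshop now calls Generative Fill or Generative Expand).
```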
02:00 But clearly there's more to it than that.
02:03 So after he told me about his process, I asked him if he would sit down and show me how it's
02:07 done.
02:08 Here, Boris is using Midjourney, which you access via Discord.
02:11 I could also use a negative text prompt.
02:14 It's one of many generative AI programs.
02:17 Weights, and they can be positive or negative.
02:20 One that I thought I was quite familiar with, but Boris was speaking a whole different language.
02:24 I could now exchange certain elements.
02:27 And I was totally confused.
02:29 I want to have the seed of those images.
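For anyone as lost as I was: the terms Boris is throwing around map to documented Midjourney prompt features. Roughly, and with made-up example values:

```python
# The Midjourney vocabulary from this exchange, shown as prompt syntax
# (per Midjourney's documentation; the example values are made up):
#   --no            a negative text prompt: things to steer away from
#   text::weight    multi-prompt weights, which can be positive or negative
#   --seed          a fixed seed, so reruns of a prompt stay comparable
prompt = (
    "diner interior::2 neon signage::0.5 "  # weighted sub-prompts
    "--no cartoon, illustration "           # negative prompt
    "--seed 1234"                           # reusable seed
)
```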
02:32 My red flag is thinking that I can do anything that anyone else can do.
02:36 So after spending two days with Boris and learning all about his process, I came home
02:39 to see if I could do it too.
02:41 More on that after the break.
02:43 More and more, we're seeing AI tools be integrated into our daily lives.
02:47 From generating quick, inspiring art, to capturing notes from important meetings.
02:51 But with SAP Business AI, their tools are designed to deliver real-world results, helping
02:56 your business become stronger and helping you make decisions faster.
03:00 This revolutionary AI technology allows you to be ready for anything that is thrown at
03:04 you.
03:05 Okay, that's it for me.
03:06 But before we go, SAP doesn't influence the editorial of this video, but they do help
03:11 make videos like this possible.
03:13 The best thing about attempting to be a promptographer is that you really don't need much to make
03:16 it happen.
03:17 I'm going to start with a simple object, a pancake, and then I'm going to try to build
03:20 a scene around it and have it look like film photos that I love, which are usually an Ektar
03:24 100.
03:25 It's my favorite film stock.
03:26 And then I'm also going to try to put it in the 1970s, because that's something that I
03:29 can't really do IRL.
03:32 Imagine a pancake.
03:34 How could this possibly go wrong?
03:36 Okay, here we go.
03:39 Those are good pancakes.
03:41 It leaned cartoon for three of the four.
03:45 Let's change that.
03:46 Let's change that.
03:47 The first thing Boris taught me was to build out my prompt.
03:50 Pancake photorealistic, diner, 1970, film texture, Kodak Ektar.
03:57 I like the text on the building over here.
04:00 I want more of that.
04:02 It's giving me kind of what I want, but I want more of like a scene.
04:06 This feels very stock still.
04:09 I also feel like it's not really getting my film look.
04:12 My process continued on like this for hours.
04:15 Boom, green.
04:17 What happened here?
04:19 We got wonky forks.
04:21 And it looks like the ceiling fell down on this stack of pancakes.
04:24 How do I reel this in?
04:26 It also didn't give me any text on that last one I want.
04:29 And with each prompt came incredibly different results.
04:33 So many pancakes.
04:35 And then I remembered what Boris told me about seeds.
04:38 The seed is like a geolocation in the latent space of the training data.
04:45 And using the same seed over and over again, you can really work on a text prompt.
04:50 If you put the same prompt into Midjourney twice, it will always generate different images.
04:54 That's because this program was designed to be random.
04:57 But every time you generate an image, the program selects a seed value for it.
05:01 And in really simple, not technical terms, the seed value is like a code for a particular
05:06 look and feel.
05:07 So if you want to maintain a consistent look with each image generation, you have to input
05:11 a similar prompt and then also input the same seed value as a previous image that you liked.
05:16 This process will help you fine tune a prompt.
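Midjourney itself isn't scriptable, but the same seed principle is easy to demonstrate with the open-source diffusers library and a Stable Diffusion checkpoint (stabilityai/stable-diffusion-2-1 here). This is a minimal sketch of the idea, not Boris's or Midjourney's actual pipeline.

```python
# Not Boris's tooling, but the same seed principle, shown with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = "pancakes in a 1970s diner, Kodak Ektar film look"

# Same prompt, same seed: you get the same image back.
generator = torch.Generator("cuda").manual_seed(1234)
image_a = pipe(prompt, generator=generator).images[0]

# Same seed, slightly tweaked prompt: the composition stays recognizably
# close, which is what makes fine-tuning a prompt possible.
generator = torch.Generator("cuda").manual_seed(1234)
image_b = pipe(prompt + ", morning light through a window", generator=generator).images[0]
```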
05:19 I really liked what I got here and I have the seed.
05:21 So now I'm going to use the same prompt with the same seed.
05:26 And I should get a very similar result that I can then manipulate and change.
05:31 Fingers crossed.
05:33 After many hours and even more stacks of pancakes, I still hadn't achieved the exact look I
05:38 was going for, though I was happy with some of the things I was getting.
05:42 I feel like I'm just scratching the surface of any sort of skill that Boris has achieved.
05:49 I mean, this like, this could not be more different than picking up a camera and taking
05:54 a photo, even if the results are similar.
05:58 Which brings us all the way back to that photo competition that Boris duped.
06:02 When he entered his work into the Sony World Photography Awards, his intention was to make
06:05 a statement about the need to recognize the work he was doing as something different from
06:10 photography.
06:11 No, I think it's very important that it's separated into different categories or different
06:19 competitions because the way the image is produced is different.
06:25 It's different technologies.
06:26 It's different forms of images.
06:28 It doesn't matter if they look the same.
06:30 It is not the same.
06:32 I can have a plastic lemon and a real lemon and they don't taste the same.
06:36 And that is what I'm trying to help get going.
06:43 This is why we needed a new terminology.
06:45 My suggestion was promptography.
06:48 And this is why it's important to talk about workflow and motivation.
06:52 Photographers tend to go out into the world, to be present at a certain location, interacting
06:59 with people.
07:00 Yes, there is technology and AI in the cameras, but we still need light that is reflected
07:05 from them.
07:06 And creating images with AI, you don't.
07:09 I can sit in a dark cellar.
07:10 I just need my technology and Wi-Fi and that's it.
07:16 On our last night in Berlin, there was an opening for Boris's work at a gallery that's
07:20 dedicated to AI-generated art.
07:23 So what you're looking at is both AI-generated photos and found photos from the time.
07:28 All of the real photos are in brown frames and the AI-generated and manipulated images
07:33 are not in frames.
07:36 Seeing Boris's process and knowing how these images were created completely changed how
07:40 I viewed them that night.
07:46 Going forward, it's going to be important that everybody knows how the images that they're
07:49 viewing were created, which is something we are very much struggling to do currently.
07:55 And giving promptography a platform allows everyone to know more about the tools that
07:58 are available.
08:00 And in Boris's mind, this will advance all art forward.
08:03 There is a study about chess.
08:05 And you know, at a certain point in time, humans were not able to win against chess computers.
08:11 But did we stop?
08:13 No.
08:14 We continued to play and we used chess computers for training.
08:18 So the level on which chess is played today is much higher than before the invention of
08:24 chess computers.
08:26 By incorporating and making use of AI, our creativity can get to a higher level,
08:32 and maybe transformative creativity is something we can focus on.
08:36 This is not the entire story of AI-generated art.
08:39 There are some big problems here around the legality of the training data.
08:44 Where are these models learning from and are those artists being compensated properly?
08:49 We made an entire video about it.
08:50 I'll link it down below.
08:51 I'm Becca.
08:52 Thank you for joining me.
