Paula Goldman, Chief Ethical and Humane Use Officer, Salesforce
Moderator: Nick Lichtenberg, Executive News Editor, FORTUNE
00:00 Hey, everyone.
00:00 Thanks for joining us today.
00:02 We want questions from the crowd.
00:05 So if anything we say resonates with you,
00:07 if you've got a question, please raise your hand,
00:10 say your name and your title.
00:12 We would love to have your questions for Paula.
00:14 I'm super excited to talk to Paula today
00:16 because she's Salesforce's first ever--
00:19 here's the title-- chief ethical and humane use officer.
00:23 And you've been doing that for five years,
00:25 but you've been in ethical technology for a decade.
00:29 So you've been thinking about these things
00:31 that the wider world has only become very aware of
00:36 in the past 12 months or so.
00:37 So describe to us your journey with ethical tech.
00:41 Well, I should say I'm a deep believer
00:43 in the ability of technology to open up incredible things
00:47 for people and society.
00:49 And I spent a lot of time working on tech for good.
00:52 And about a decade ago, at the time
00:54 I was working for Pierre Omidyar, the founder of eBay,
00:57 we kind of looked up and said, oh, wow, tech
00:59 is no longer the underdog.
01:01 Tech needs guardrails.
01:03 And it's been an amazing set of developments.
01:07 But as you were just saying, it's also really gratifying
01:11 to be in a moment where the field of tech ethics
01:14 is in the news all the time.
01:16 And actually one of the top questions
01:19 we get from our customers when we're talking about AI
01:22 is, can we trust it?
01:24 So it becomes a set of innovations and features
01:26 in the product.
01:27 Right, and Salesforce has this concept
01:29 of the human at the helm.
01:30 And to me, that conjures up someone on a ship,
01:34 the human at the helm of the ship.
01:36 I think our art is very apt today.
01:39 We've got this human, but they're driving all these gears.
01:42 Does that define your philosophy at Salesforce?
01:45 Yeah, so I mean, if you think about-- so think about where
01:48 we are in generative AI, right?
01:49 So you just mentioned people have been captivated
01:51 for about a year, year and a half.
01:54 And at phase one, a lot of people
01:56 were talking about human in the loop.
01:58 Actually, I feel that that term comes from the military, right?
02:02 It was the notion of having a person check
02:04 before a really consequential decision was made by a machine.
02:09 I think that's no longer good enough, right?
02:12 And I think what is really important right now
02:14 and as we're moving from AI just generating content--
02:18 I say just because it was blowing our minds a year ago--
02:22 to taking action on our behalf, the notion of co-pilots, right?
02:27 We need next level controls.
02:30 We need people to be able to understand what's
02:32 going on across the AI system.
02:35 And most importantly, we need to be designing AI products that
02:40 are taking into account what AI is good at and bad at,
02:43 but also what people are good at and bad
02:47 at in their own decision making and judgment.
02:50 Yeah, so you said co-pilot there.
02:51 I think that's an important word because we
02:54 hear a lot about autopilot in various discussions around AI.
02:58 But you insist it's a co-pilot situation,
03:01 where we've got the human who's got the AI co-pilot next
03:04 to them.
03:05 Right, that's right.
03:07 I flew here.
03:08 I didn't go into a plane that was flying on its own, right?
03:13 And I think the idea of AI is that it's
03:15 going to augment human ability.
03:17 It's going to augment human judgment.
03:19 It's going to augment our ability to do our jobs.
03:21 Better and more productively.
03:23 And to do that, we need to make sure
03:25 that we've got the two strengths working side by side.
03:29 So what have you found about what people are good at
03:32 and what AI is good at?
03:34 Which role should which co-pilot have?
03:36 Well, let me give you a few examples
03:38 that I think are really incredible as we think
03:40 about human at the helm.
03:42 We do a lot of work on AI for service workers, AI
03:47 for service.
03:50 One of our earliest generative AI products
03:52 was used with a major luxury consumer brand
03:56 and their service workers.
03:57 And what they found was that because the service agent
04:03 didn't have to spend as much time looking up
04:05 the answer to a routine question,
04:08 they actually were able to do what really only people could
04:11 do, which is connect with their customers.
04:14 And they found that all of a sudden service agents
04:17 were doing 20% more product conversions.
04:20 They were salespeople now.
04:22 Or think about the doctor that uses AI for note-taking
04:26 and is able to actually pay attention
04:28 to what their patient is saying and what they're not saying
04:32 and ask better questions and get to a better judgment
04:36 or diagnosis at the end of that conversation.
04:38 So really, AI is already transforming jobs.
04:41 People with their AI co-pilots, people
04:44 are becoming salespeople overnight.
04:47 Doctors are becoming a new type of doctor.
04:49 Yeah, it's changing.
04:50 Well, it's definitely changing the nature of jobs.
04:52 And it's freeing up people to do things
04:54 that are going to become higher value for the companies
04:58 they're working in.
05:00 Do we have any questions from the room so far?
05:02 One over here in the back there.
05:06 Tom Whitaker from the law firm Burges Salmon.
05:14 Really interested to hear about AI ethics.
05:16 How do you ensure that those who talk about AI ethics
05:20 not only talk the talk, but walk the walk?
05:22 How do you ensure compliance?
05:25 That's a very interesting question.
05:27 So what I would say is it's really important.
05:30 Actually, I think AI ethics is becoming
05:32 a skill in all of our jobs.
05:34 But the way that we approach it at Salesforce
05:35 is actually like building it into the product itself.
05:39 So you and I were talking about this a little earlier.
05:42 The question of how do you build friction
05:44 into the product so people actually
05:46 know how to use things responsibly?
05:47 You said it's mindful friction.
05:49 Mindful friction is a term that we use.
05:51 So I'll give you an example.
05:52 We have a marketing segmentation product.
05:56 So it helps you.
05:57 You want to send an email campaign.
05:58 You want to generate an audience for that email segment.
06:02 So sometimes using demographics for that purpose is fine.
06:07 Sometimes, actually, it's not fine.
06:09 It introduces bias.
06:10 You might be overlooking audiences;
06:13 it might be better to target people who bought dresses recently
06:18 than just women of a certain age group.
06:21 And so the question then is, how do you design a product
06:25 so that those types of questions, which
06:27 are matters of judgment, are front and center?
06:29 Well, we did a small thing in that product interface
06:32 where if the AI generates a segment that has demographics,
06:36 those demographics are unchecked.
06:38 And you simply have to check them to bring them in.
06:40 We also have another example, a model builder product
06:43 that enables people to build predictive models
06:46 or bring in their own generative models into our Einstein One
06:49 platform.
06:50 And there again, if you're building a model and you're,
06:52 let's say, you bring in zip code or postal code
06:55 into your model, that can be correlated with race.
06:58 Again, sometimes that is OK.
07:00 Sometimes that introduces unwanted bias.
07:03 We have a little warning toggle that pops up.
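The mindful-friction pattern described here, demographic fields left unchecked by default and a warning when a proxy field such as postal code is added, could be sketched roughly as below. This is a minimal illustration only; the field names, data structures, and warning text are assumptions made for the example, not Salesforce's actual product code or API.

```python
# Minimal sketch of the "mindful friction" idea: names and structures here
# are hypothetical examples, not Salesforce's actual product code.
from dataclasses import dataclass
from typing import List, Optional

# Demographic fields and common proxies for them (postal code can correlate
# with race); these get extra friction instead of being silently included.
DEMOGRAPHIC_FIELDS = {"age", "gender"}
PROXY_FIELDS = {"zip_code", "postal_code"}

@dataclass
class SegmentAttribute:
    name: str
    selected: bool = True          # ordinary attributes are included by default
    warning: Optional[str] = None  # shown in the UI when set

def prepare_segment(generated_fields: List[str]) -> List[SegmentAttribute]:
    """Turn AI-suggested audience fields into UI entries.

    Demographic fields come back unchecked, so a person has to actively opt
    in to using them; proxy fields carry a warning the user sees before the
    segment or model is built.
    """
    entries = []
    for name in generated_fields:
        entry = SegmentAttribute(name=name)
        if name in DEMOGRAPHIC_FIELDS:
            entry.selected = False  # require an explicit human decision
        if name in PROXY_FIELDS:
            entry.warning = ("This field can correlate with protected "
                             "characteristics; confirm it is appropriate.")
        entries.append(entry)
    return entries

# Example: "women of a certain age group who recently bought dresses"
for e in prepare_segment(["recent_dress_purchase", "age", "gender", "zip_code"]):
    print(e.name, "| selected:", e.selected, "| warning:", bool(e.warning))
```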
07:06 So there are many layers to the question of compliance,
07:11 starting with regulation and a lot of other topics.
07:14 But the piece that I'm actually most excited about
07:17 is how do you design products so that you actually
07:20 know what to trust and where you should take a second look
07:24 and apply human judgment to the AI outcome.
07:27 Have you been surprised while designing mindful friction
07:29 there in terms of things you think you can trust
07:32 versus things you know you can trust?
07:34 I think that the part that's been really interesting
07:37 is how do you design controls that
07:40 allow the human to look at the totality of what's
07:43 going on in AI, to the point about human in the loop
07:46 versus human at the helm.
07:47 One of the things that we built into our Einstein One products
07:50 was an audit trail so that you can say,
07:54 you had a whole email campaign.
07:55 You generated a million emails.
07:59 But there were these 50 over here
08:02 that got edited consistently before they were
08:05 sent on to the customer.
08:06 Maybe you have a problem with the knowledge article
08:09 that you used.
08:10 So that's like a signal to you.
08:11 Exactly.
08:12 Just 50 emails out of a million.
08:13 Yeah.
08:14 And I think increasingly, that's-- so we're at stage one.
08:18 Here's an audit trail.
08:19 Increasingly, I think we're heading towards systems
08:22 that can detect anomalies like that
08:25 and encourage and prompt the human
08:27 to take a second look at it.
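The audit-trail signal described above, a small cluster of generated emails that all got edited before being sent, pointing back at a problematic knowledge article, could be surfaced with a simple aggregation along these lines. Again, this is only an illustrative sketch; the record fields and the threshold are assumptions for the example, not how the Einstein One audit trail is actually implemented.

```python
# Illustrative sketch only: the AuditRecord fields and the threshold are
# assumptions for the example, not Salesforce's actual audit-trail schema.
from collections import defaultdict
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AuditRecord:
    email_id: str
    source_article: str       # knowledge article the draft was generated from
    edited_before_send: bool  # did a human rewrite it before it went out?

def flag_suspect_articles(trail: List[AuditRecord],
                          min_edits: int = 50) -> List[str]:
    """Group the audit trail by source article and flag any article whose
    generated emails keep getting edited, so a human is prompted to take a
    second look at that article rather than at a million individual emails."""
    edit_counts: Dict[str, int] = defaultdict(int)
    for record in trail:
        if record.edited_before_send:
            edit_counts[record.source_article] += 1
    return [article for article, count in edit_counts.items()
            if count >= min_edits]
```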
08:30 Fascinating.
08:30 Do we have another question from the room over here?
08:32 Thank you.
08:36 Bartik Argonowski from Levera.
08:38 At Levera, we coach human skills.
08:40 I absolutely love what you're doing.
08:43 You use the term human intelligence, which is amazing
08:47 and, of course, in many aspects much more important
08:49 for humans than artificial intelligence.
08:53 How do you define or measure human intelligence?
08:58 How long do you have?
08:59 Obviously, this is--
09:01 We have six minutes.
09:02 Yeah.
09:02 This has been debated--
09:03 We have about four minutes.
09:04 I think there's a fierce ongoing debate about what
09:07 is the nature of artificial intelligence, what is
09:09 the nature of human intelligence.
09:11 But I think the way that we measure it,
09:13 at least in the context of what Salesforce
09:17 is building, is that there are business outcomes
09:20 that our products are intended to achieve.
09:23 And again, to be able to free up the service
09:26 agent to develop a deeper human connection.
09:29 Or for example, we were working with an insurance company,
09:33 a health insurance company in the UK.
09:35 And again, by freeing up service workers
09:38 to deal with more routine--
09:39 to not have to deal with the more routine insurance cases,
09:43 but then pay more attention to their vulnerable populations
09:46 and engage with them by phone.
09:48 It's that type of interaction, those types
09:52 of human connections that I think
09:53 are deeply, deeply important.
09:55 And on a part of the landscape of human AI interaction
09:59 and human intelligence that I think--
10:02 I would argue it cannot be automated.
10:06 Interesting.
10:07 So human intelligence is about person-to-person interaction,
10:12 would you say?
10:13 I mean, I would say that is one factor of it.
10:17 One factor.
10:18 But human intelligence is much broader than that,
10:20 obviously.
10:21 Of course.
10:21 Yes.
10:22 I want to go back to your academic background.
10:25 You have a PhD from Harvard.
10:28 And this was just uncanny to me when I met you--
10:30 it's on how unorthodox ideas go mainstream.
10:34 And you've been working in tech ethics for a decade now.
10:37 And now it's the most mainstream topic that there is.
10:40 So isn't that uncanny to you?
10:42 Can you believe that that's what you studied?
10:44 And here you are at the Fortune Brainstorm AI in London
10:47 talking about this.
10:48 And it's the top issue in the world right now.
10:52 I love hearing you say that.
10:55 No, I think when I started getting interested in tech
10:58 ethics, it was because I saw that there was an unfilled need
11:02 and that this needed to become super important.
11:06 And so it is very gratifying that it
11:08 has risen in importance.
11:10 And I hope that the attention that's
11:13 being paid to these issues of trust
11:15 continues and is not a momentary thing.
11:18 Because we talk about--
11:21 I'm sure all of you have watched these different waves of AI,
11:24 AI summers and AI winters.
11:27 I think there is no doubt right now about the capabilities
11:30 of AI; the capabilities
11:35 are developing very, very quickly.
11:38 My concern would be that it's possible
11:41 that the next AI winter is caused by trust issues with AI
11:46 or people adoption issues with AI.
11:49 And so it is very important to me
11:52 that these types of trust innovations in products
11:55 and the types of training and on-the-job work
12:00 and the people adoption within companies,
12:03 I think that's what's going to continue to unlock AI
12:06 productivity and AI gains for companies.
12:08 - Yes, we have just a moment for one final thought from you,
12:12 one prediction of what you see next.
12:13 You were just saying trust, do you
12:14 think will be at the center of it?
12:16 - Yeah.
12:16 - I also think since meeting you that you're
12:18 kind of a psychic, that your doctorate, you
12:20 predicted your future career.
12:22 You predicted the tech ethics would be the big question.
12:25 So what is your clairvoyant view of what's coming next?
12:29 - I think-- so two things.
12:31 One is companies paying attention
12:34 to the people side of AI.
12:35 I think we're going to see a lot more attention
12:37 not only to the interface and the sort of the--
12:40 how you design AI for trust, but also how do you bring people
12:46 along on the journey?
12:47 How do you allow the employees to experiment?
12:50 So that's one big thing.
12:51 The second big thing, I think, is data.
12:53 - Data.
12:53 - We just have not talked enough about data.
12:56 And I think this year we will start to.
12:59 Data governance, data governance
13:01 as part of the regulatory conversation, which
13:03 has been largely missing.
13:05 And I think that's also--
13:06 I think that that's the second topic, data.
13:09 - OK.
13:10 Well, we're going to leave it there.
13:11 Paula, thank you so much for joining us.
13:13 It was a pleasure.
13:14 - Thanks for having me.
13:14 - Thank you, everyone.
13:15 [APPLAUSE]