It’s still early days for AI in the Philippines, but journalist and UP professor Karol Ilagan describes how AI tools can already impact journalism, from the basic task of transcribing interviews to combing through dense COA reports in search of patterns and red flags.
Transcript
00:00 Good day, podmates! I'm Howie Severino, once again
00:03 reminding you that a long attention span is a gift for the intelligent.
00:07 Our guest today is a journalist, professor, and researcher
00:12 who is now focused on Artificial Intelligence or AI
00:15 and its effects on the media and on all of us.
00:19 This is Professor Karol Ilagan, who teaches journalism at UP Diliman.
00:24 I've been working with PCIJ for a long time
00:27 where I serve as the chair of the Board of Editors.
00:31 Good day to you, Karol, and welcome to my podcast.
00:35 Good day, Howie. Thank you for the invitation.
00:40 Karol, I was in the audience last week in your presentation
00:45 at the 3rd National Conference on Investigative Journalism,
00:50 where you talked about co-opting AI or Artificial Intelligence,
00:55 investigating AI.
00:58 So please tell us about this research project.
01:01 A lot of people are asking about it.
01:03 So what do you mean first by co-opting AI?
01:08 Okay, this research is for a book.
01:11 The title of the book is "Future of Media in Asia."
01:16 So it's a book series.
01:18 In particular, there's a focus on the media.
01:21 We'll look at what our future will be.
01:24 And AI has been discussed for a few years now.
01:29 So I thought we should look at how AI is being taken up in the Philippines
01:37 and in other countries in Southeast Asia.
01:40 Although I need to mention, Sir Howie,
01:42 you know that the research is yet to be published,
01:47 so I might not be able to provide details.
01:51 But at least I can share the broad strokes.
01:54 So you're researching the uptake, in other words, the adoption,
02:00 how it's used, and what is the extent of its use.
02:03 I presume that AI is not yet widely used in the Philippines.
02:10 But there are already uses.
02:13 So how is Artificial Intelligence used in our country today?
02:18 I don't think you're mistaken.
02:22 The adoption or use of AI is still low in our country.
02:28 If it's used, it's more specific to a task.
02:36 For example, transcription tools.
02:39 Those are the common AI tools we use.
02:43 Or the most common one we're talking about is ChatGPT, or generative AI.
02:50 So our adoption is still low.
02:54 So what got me interested here is that our conversation on AI,
03:00 or the narrative or discourse, is that AI is ChatGPT.
03:05 So that's what we're talking about when we look at how it's used
03:12 in many other countries.
03:14 AI has been a big thing, in particular for investigative reporting, for instance.
03:21 So we have a lot of examples of stories that would not have been possible
03:26 if AI was not used.
03:29 And I'm not speaking of generative.
03:32 Excuse me, Karol.
03:34 I will ask you about how it's being used in investigative journalism
03:38 in other countries especially.
03:40 Because it's not used much here in our country in investigative journalism.
03:43 But you already mentioned ChatGPT.
03:46 That is pretty widely used now in the Philippines.
03:50 But we may have listeners who don't know what that is.
03:56 But basically, ChatGPT is just one tool on the internet that you can ask questions.
04:01 It's artificial intelligence driven.
04:04 It can answer a lot of questions.
04:08 And it can actually perform tasks for you.
04:11 You can give it an assignment like, "Write me a speech about investigative
04:16 journalism in the Philippines."
04:17 It will give you a speech.
04:19 I mean, it's not guaranteed that that's what you want to say
04:23 or that it will give you something good.
04:26 But it can give you something, right?
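To make that concrete, here is a minimal sketch of giving a generative model an "assignment" programmatically, using the OpenAI Python client; the model name and prompt are illustrative assumptions, and ChatGPT itself wraps this same kind of model in a web chat interface.

```python
# Minimal sketch (assumes: pip install openai, and OPENAI_API_KEY set in the
# environment). ChatGPT is this kind of model behind a chat web interface.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice; any available chat model works
    messages=[{
        "role": "user",
        "content": "Write me a speech about investigative journalism in the Philippines.",
    }],
)
# It gives you *something* -- not guaranteed to be good or what you meant to say.
print(response.choices[0].message.content)
```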
04:28 Of course, ChatGPT is just one version of this AI tool.
04:32 I know my son uses Bard, and then other people use other tools, etc.
04:37 But ChatGPT is an online tool.
04:40 When we talked about this with Jaemark, of course, there are different applications.
04:44 But the one he's using the most now is ChatGPT.
04:50 Think of it as going to Google, but it's contextual.
04:53 It will give you information.
04:57 But because we mentioned generative, it can produce content.
05:02 That's why it became controversial because it can produce content
05:08 like when we produce content also.
05:12 And the big difference with the search engine is that it does not provide you
05:23 with any links.
05:24 It does not bring you anywhere else.
05:26 It's not like Google.
05:28 If you're researching, for example, if you want to search AI itself,
05:35 Google will produce a whole list of links of websites where you can read about AI.
05:44 But AI tools themselves will not give you any links.
05:49 They will just give you what they think you want to know.
05:54 And this has a big effect actually on what's called search referrals.
05:58 That's a big source of traffic and eyeballs for a lot of websites,
06:03 a lot of media pages, news sites.
06:08 So it has a big impact in terms of internet models, business models,
06:14 web companies, websites.
06:19 As you mentioned, people are doing their search on AI tools,
06:24 which will not bring you to another webpage, unlike Google.
06:30 So it's actually changing the way we're using the internet.
06:34 Right.
06:35 That's one of the big issues with regards to generative AI tools like ChatGPT
06:42 because I think there are some news outlets that block these tools
06:49 so that news reports or information from their websites are not used.
06:54 Although I think the new development now is that there are others
06:59 that have licensing agreements with, for example, the makers of ChatGPT
07:06 so that information from their websites can be used.
07:10 So this is a bit fast-moving, right?
07:14 Developments are fast.
07:17 Obviously, there are a lot of considerations.
07:21 We've talked about ethical considerations already.
07:26 And obviously, the impact on news outlets, given our context,
07:31 of course, we're talking about disinformation as well.
07:34 Now more than ever, we need to access legitimate news sources.
07:42 So, the issues with regard to AI tools are mixed.
07:49 I would also like to have a little shift in our discussion
07:54 because obviously, granted, we are not discounting ethical considerations
08:01 and the impact on our jobs, not just in journalism,
08:08 but all sectors and fields of work can be affected.
08:14 What I'm thinking is whether we're missing out on its potential in other things.
08:23 Our discussion has been centered on the risks.
08:27 And it's true that those should be discussed, but that's why it's co-opting AI.
08:34 I'll go back to the question earlier.
08:36 We might be missing out on other ways for AI to be used to advance reporting
08:45 or advance our work as journalists.
08:48 Okay, so based on your research, what are we missing out on when it comes to AI?
08:55 What are the things that are being done in other places,
08:59 other organizations or countries that we can do now
09:04 that can help us in our work as journalists?
09:09 Okay, there are many who are studying AI and journalism.
09:13 Reuters Institute is one of them, and there's another organization
09:19 that runs what is called Journalism AI.
09:22 They've been doing this for quite some time already.
09:25 Obviously, literature is more focused on the Global North or the experience in the West.
09:33 If you look at AI and journalism, based on these studies,
09:39 it's used in the entire process of newsmaking,
09:47 finding stories, producing stories, and presenting stories.
09:53 So, this is where we'll start, of course, with transcriptions.
09:59 And then we'll go to the tools that can process data or sift through data.
10:07 I think this is one of the most powerful things that AI can do
10:14 because there might be stories that we can't see if we do it manually.
10:21 But with the use of AI tools, it will be easier to find stories.
10:29 It can also help with the distribution or presentation of stories.
10:35 For example, for a lot of reporters, and especially maybe in the Global South,
10:43 where our newsrooms are small,
10:46 oftentimes, the context of the Global South is that the journalists are multitaskers.
10:53 So, you might be exhausted from putting together your story,
10:57 but you're still the one who will do the social media, for example.
11:01 So, that's one of the findings.
11:03 AI can also help in the last part of the production process in journalism.
11:14 But let's go back to what you said about story production,
11:18 how it helps in the creation of stories, in research, in reporting.
11:22 Do you have examples of how AI can help?
11:27 Have you come across any good reporting where AI helped?
11:35 Actually, in 2019, there weren't that many examples of AI-aided investigations, for instance.
11:44 But the pandemic had a big impact on those of us used to going out into the field.
11:52 We were forced to make do with what we had; there were mobility restrictions.
12:00 What I noticed in the research is that we had a lot of examples,
12:08 and one of them is the ones produced by the International Consortium of Investigative Journalists or ICIJ.
12:16 So, when we talk specifically about what AI tools they used,
12:28 they use machine learning to detect patterns.
12:32 And if you have a lot of records, they use computer vision to extract data.
12:42 So, let's say we're talking about offshore leaks, for instance,
12:47 where you see the names of Filipinos in their dataset.
12:53 So, computer vision, for instance, can crawl millions of data points and documents
13:01 to find where a name appears in the leak,
13:05 and what this big leak means for this particular person.
13:14 So, for example, in that ICIJ project, PCIJ took part in the Paradise Papers.
13:23 So, ICIJ is the same organization that came up with the Panama Papers, for instance.
13:28 But the Paradise Papers and the FinCEN Files
13:34 are among the projects or collaborations where PCIJ participated.
13:40 So, essentially, when we were logging in to the repository,
13:45 we were using AI to crawl the documents and data.
13:51 So, let me go back to the Paradise Papers, how the names came out.
14:01 For instance, there were a few members of government.
14:10 So, you're describing the method.
14:11 In other words, AI can help the reporter go through lots and lots of data
14:19 that normally in a previous era, you would have to look at with the naked eye.
14:23 It would take you days to look through maybe millions of data points.
14:30 AI can do it in minutes or even seconds and produce the results that you want.
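As a toy illustration of that recap, the sketch below scans a folder of extracted document text for approximate matches to a watchlist of names, the kind of lookup that defeats manual review at scale. Everything here, the folder layout, names, and threshold, is a hypothetical stand-in; ICIJ's actual pipeline used machine learning and computer vision over millions of records.

```python
# Toy sketch of leak-style name matching over many documents (illustrative only;
# not ICIJ's pipeline). Fuzzy matching catches misspellings that Ctrl+F misses.
import difflib
from pathlib import Path

WATCHLIST = ["Juan Dela Cruz", "Maria Santos"]  # hypothetical names of interest

def find_name_hits(doc_text, names, threshold=0.85):
    """Yield (name, phrase) pairs where a window of words closely matches a name."""
    words = doc_text.split()
    for name in names:
        n = len(name.split())
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            score = difflib.SequenceMatcher(None, name.lower(), phrase.lower()).ratio()
            if score >= threshold:
                yield name, phrase

# Assumed layout: a leak/ folder of .txt files already extracted from documents.
for doc in Path("leak").glob("*.txt"):
    for name, phrase in find_name_hits(doc.read_text(errors="ignore"), WATCHLIST):
        print(f"{doc.name}: possible match for {name!r}: {phrase!r}")
```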
14:37 The ICIJ stories have resulted in some of the most impactful global collaborations.
14:43 You mentioned the Paradise Papers.
14:45 How does AI help in making those stories?
14:49 What's the story?
14:51 So, essentially, it means that there are government officials who use offshore havens.
14:59 To park money?
15:00 Yes, to evade paying taxes, or there are activities that they do in a tax haven
15:09 so that they don't have to pay taxes.
15:14 Then, they also found out that apparently,
15:18 the other activities are related to the supply of arms.
15:25 So, it's related to war as well.
15:28 So, there are findings in this investigation.
15:33 Again, it would have been probably impossible to find without the use of AI.
15:41 Okay, because somebody listening might be telling themselves,
15:45 "Can't you use the find function in an Excel sheet?"
15:52 I mean, why would you need AI to find the names which are there in the data?
16:00 You can search for it.
16:01 What's the advantage of using AI in a project like that?
16:06 Well, first, we're talking about millions of files.
16:11 So, it would really be impossible to do that
16:14 with just Ctrl+F, for instance.
16:17 So, obviously, it's a big thing that AI can save time on this tedious process.
16:24 And second, the accuracy as well.
16:27 Because if we do it manually, it would be really difficult if you just look for it one by one.
16:35 And third, it can also be used in a very different context.
16:38 I'll give another example.
16:40 So, this is a story done by Armando.info two years ago.
16:46 So, this is in southern Venezuela.
16:49 So, they started with the lead that there are many illegal mines operating in southern Venezuela.
16:57 Now, the context there is that these are places that obviously cannot be visited
17:03 because there are gangs there.
17:06 So, it would be dangerous for a journalist to plot where the illegal mines are.
17:15 So, what the team did, this was led by Joseph Poliszuk and his partner María Ramírez.
17:26 When they looked at the satellite imagery, the only identifier of the mine was that it had an airstrip.
17:35 So, obviously, for the supply to be obtained, there needs to be a landing.
17:43 The airplane needs to land there.
17:46 So, that's what they used.
17:48 So, they used AI. They fed into the system that if there's an airstrip, it's potentially a mine.
17:58 So, that's what they did.
18:00 So, from just a few mines that they knew existed,
18:05 they were able to track the extent of illegal mining in southern Venezuela.
18:12 Of course, again, they used AI to illustrate the extent of the problem.
18:22 It's not just that it would be tedious; there's a danger, a risk to the journalist.
18:29 If you were to do it yourself, you would have to go to the field.
18:33 So, that's another example.
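The pattern underneath that story, label a few known examples and let a model scan the rest, can be sketched in a few lines. This is hypothetical illustration code, not Armando.info's actual system: assume a tiles/ folder of satellite image tiles sorted into airstrip and no_airstrip training examples, plus unlabeled tiles to scan.

```python
# Minimal sketch: train a binary classifier on labeled satellite tiles, then
# flag unlabeled tiles that likely contain an airstrip (illustrative only).
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.linear_model import LogisticRegression

def tile_to_features(path, size=(64, 64)):
    """Downscale a tile to grayscale and flatten its pixels into a feature vector."""
    img = Image.open(path).convert("L").resize(size)
    return np.asarray(img, dtype=np.float32).ravel() / 255.0

X, y = [], []
for label, folder in enumerate(["no_airstrip", "airstrip"]):  # 0 = no, 1 = yes
    for p in Path("tiles", folder).glob("*.png"):
        X.append(tile_to_features(p))
        y.append(label)

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))

# Scan tiles no one has labeled and surface the likely airstrips for review.
for p in Path("tiles", "unlabeled").glob("*.png"):
    prob = clf.predict_proba([tile_to_features(p)])[0, 1]
    if prob > 0.9:
        print(f"{p.name}: possible airstrip (p={prob:.2f}), verify manually")
```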
18:35 Okay, you mentioned earlier, Karol, Jaemark.
18:38 You're referring to Jaemark Tordecilla, who is a former editor-in-chief of GMA News Online.
18:45 He was also previously with PCIJ.
18:48 But currently, he's at Harvard as a Nieman Fellow.
18:52 And one of his projects there has been to develop AI tools for investigative journalism.
18:59 Have you had a chance to chat with him, talk about this with him, and actually try out his tools?
19:08 Yeah, I've tried the COA Beat Assistant.
19:14 And then I think now he's doing something new, looking at budget records.
19:19 I think what Jaemark is doing now is good
19:23 because he's aligning it with how we, Filipino journalists, can use it,
19:31 in particular with the usual pain points when we report.
19:37 So, obviously, it's a big thing when it comes to budget.
19:41 So, obviously, the COA reports.
19:44 Commission on Audit.
19:45 Yeah.
19:46 Audit reports.
19:47 They often flag misuse of money or kind of suspicious uses of money, etc.
19:56 So, how was AI used in investigating or examining Commission on Audit reports?
20:07 So, in the tool he made, the big help was in finding leads.
20:18 So, you can ask what the key findings are from this particular Commission on Audit report.
20:27 Now, obviously, for journalists, it doesn't mean that whatever the tool outputs is what we'll report.
20:35 But I think the big thing, as opposed to reading the entire report,
20:44 which we know is very technical,
20:49 is that we have a starting point, a lead.
20:52 So, for example, I was looking into Marikina City's Commission on Audit report.
20:59 One of my students last semester was looking at the use of the special education fund.
21:06 So, when we tried the tool, it came out that there seems to be an issue with the special education fund in Marikina City.
21:16 So, in the process, it helped in finding stories.
21:21 Then the time you would have spent finding stories, you can now use to analyze what happened in the use of the special education fund in Marikina City.
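The pattern behind a tool like that can be sketched as document Q&A: extract the text of an audit report and ask a language model for leads. This is a minimal sketch, not Jaemark Tordecilla's actual implementation; the file name, model, and prompt are assumptions, and the crude truncation just keeps the example short.

```python
# Minimal document-Q&A sketch (assumes: pip install pypdf openai, an
# OPENAI_API_KEY in the environment, and a locally downloaded COA report PDF).
from pypdf import PdfReader
from openai import OpenAI

reader = PdfReader("marikina_coa_annual_audit_report.pdf")  # hypothetical file
report_text = "\n".join(page.extract_text() or "" for page in reader.pages)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative choice
    messages=[
        {"role": "system",
         "content": "You summarize government audit reports for journalists. "
                    "Stick to what the report says; do not speculate."},
        {"role": "user",
         "content": "List the key audit findings and any flagged funds in this "
                    f"report:\n\n{report_text[:50000]}"},  # crude truncation
    ],
)
print(response.choices[0].message.content)  # leads only; verify against the PDF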
21:35 Okay, so this COA assistant, as Jaemark has been calling it, or as you referred to it, it's in the public domain.
21:45 I mean, it's free. If a journalism student or anyone wants to try out the tool, it's just out there.
21:57 A search away, and people can access it.
22:01 When we were using it for the Marikina COA report, the language of the assistant was on the conclusive side,
22:14 so of course, as journalists, we should be careful, because we haven't even seen the record itself and what the COA assistant based its answers on.
22:26 So, we know that this is public, but Jaemark Tordecilla is also very mindful that when using it, of course, we also have to check the original file.
22:41 Yeah, and there's always a reminder from media organizations that there should always be human oversight.
22:50 Don't let AI do all of the work
22:52 and just pass it on to the public. That's why it's just an assistant.
22:56 It's supposed to just assist us. So, whatever conclusions it has, you're right.
23:02 We should be skeptical, we should question it, and we should come up with our own conclusion.
23:08 Maybe aligned with the inputs of AI but it should also come from our own brains.
23:16 That's why verification should not be lost. At UP, we invited him to speak about the COA assistant.
23:25 And we also talked about the risk of shortcuts, because obviously for us journalists,
23:35 before the COA assistant, this AI tool, came along, we had the experience of reading what a COA report actually says.
23:44 Even if it's not always productive, it's a big thing for us that we can apply to other reporting,
23:52 because we know the composition of a COA report, how to analyze it, and who else to speak with to explain the COA findings.
24:05 So, that's also what we're talking about. We should be mindful of the possible shortcuts we might take if we use AI tools.
24:18 So, we should use them while still doing our own research, of course, and verification.
24:26 Earlier, you mentioned that people, journalists especially, have been using AI to transcribe interviews.
24:37 Our profession involves a lot of interviewing and transcribing of interviews so that we don't make mistakes or we don't just base it on our notes or our memory.
24:53 Sometimes, it takes hours and hours to transcribe interviews.
24:57 Now, more and more of us are using AI tools to transcribe.
25:02 What can you tell us about this? Is this accurate? That's a big question.
25:07 And in terms of the language, can they do this?
25:11 For example, what we're talking about today, Taglish.
25:14 In the Philippines, there are many languages. There are many people who speak Bisaya, Ilocano, etc.
25:21 What are the limits of transcribing and what are its potentials?
25:28 Well, of course, compared to maybe several years ago, I think the leap of improvement of transcription tools is huge now.
25:37 So, from the ones I've tried, they're useful in the sense that I won't be using the whole transcript for a story anyway.
25:49 But at the very least, I can identify what is important.
25:54 But I always make it a point to listen back to the audio recording because there are limits.
25:59 Because as you said, it's not yet that good with the Filipino language.
26:06 It's a common problem with Bahasa as well.
26:09 So, there's no tool that can really get Filipino, or even more so the dialects.
26:18 But my observation is that it can capture something that is Filipino.
26:22 So, it's still helpful, but there are still limits.
26:27 It also makes it less tedious for, let's say, an intern or a student to do the transcribing.
26:36 So, that tool is a big deal now.
26:38 I myself, I transcribe also.
26:41 So, all journalists go through that.
26:43 So, it's made the job of journalism a bit easier.
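For the curious, here is a minimal transcription sketch using OpenAI's open-source Whisper model; the file name is an assumption, and as Karol says, output for Filipino or Taglish audio is a draft to check against the recording, not a finished transcript.

```python
# Minimal sketch (assumes: pip install openai-whisper, plus ffmpeg installed).
import whisper

model = whisper.load_model("medium")        # larger models handle accents better
result = model.transcribe("interview.mp3",  # hypothetical local recording
                          language="tl")    # hint Tagalog; omit to auto-detect
for seg in result["segments"]:
    print(f"[{seg['start']:7.1f}s] {seg['text'].strip()}")
```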
26:46 But the flip side of co-opting AI, seeing how it can benefit our profession and other professions,
26:58 the flip side is investigating AI.
27:02 What do we need to investigate?
27:04 I noticed in your presentation at the conference last week, you mentioned an AI accountability network.
27:12 So, making AI accountable. What should we be watching out for?
27:17 What should we be careful about when it comes to AI?
27:21 So, since last year, I've been an AI Accountability Network fellow of the Pulitzer Center.
27:28 So, this is just the second cohort. It's still new.
27:33 But basically, this group, we're working on AI accountability stories.
27:39 So, essentially, what we're looking at is the tools or services we use, whether they use AI or so-called algorithms,
27:52 how these impact our daily lives, whether or not there's something hidden in these algorithms,
28:03 and whether these algorithms are regulated.
28:07 Because if we look at it, products like food and cosmetics
28:13 go through rigorous testing before you put them on the market, right?
28:20 But when it comes to technology, we have a lot of gaps.
28:27 So, maybe we're just not aware of it, but a lot of the tools that we use now on our smartphones,
28:35 our apps, actually use algorithms, and we don't know who made the decisions behind them.
28:43 And what impact does it have on us based on how we use it?
28:50 So, this is what we're looking at.
28:54 What's the focus of this reporting?
28:57 Well, obviously, reporters in the US and the UK, they look at Silicon Valley.
29:05 This became part of the tech reporting, AI accountability.
29:10 A large part of this is still not explored, especially in the Global South and of course the Philippines,
29:17 because our role in AI is also big.
29:21 So, earlier, what I mentioned was investigating algorithms or the so-called black boxes.
29:28 But if we look at how AI works,
29:33 a lot of the digital work that AI systems need in order to function is carried out here.
29:41 So, this is where the term data labeling comes from, the so-called platform work.
29:48 So, this is also part of AI accountability reporting.
29:53 That's what the reporters who are part of this network are looking at.
29:58 Alongside this, they will hold trainings so that there will be more AI accountability reporters.
30:09 Actually, I recall one of the slides in your presentation last week was about digital sweatshops in the Philippines.
30:18 Some of these business process outsourcing centers in the Philippines are based in Cagayan de Oro.
30:28 These digital sweatshops.
30:30 This is according to a Washington Post article,
30:33 whose reporters did deep research there.
30:39 They say that there are actually a lot of people all over the world,
30:46 and in the Philippines, there are 2 million working in these so-called digital sweatshops at very low pay.
30:54 They are doing quality assurance or quality control.
31:00 As you said earlier, data annotation.
31:03 Essentially, for AI tools to work, they need lots and lots of data.
31:09 Where does it come from?
31:11 But the data also has to be good data.
31:14 The data used by AI can't just be made up.
31:22 That's why there are workers in the Philippines who are labeling it and reviewing it for accuracy.
31:32 Maybe deleting some or recommending some for deletion or non-use by AI.
31:41 Not many people know that there is such a thing.
31:45 That's one of the things you said in your presentation, that there's a very human element behind all of these AI machines.
31:56 The machine learning of AI.
31:58 There are a lot of human inputs there.
32:01 It's interesting because when we talk about AI, as you mentioned,
32:05 there are a lot of human components.
32:08 For example, the ones I talked to, as I understand, these are for self-driving cars.
32:16 So they mark road signs.
32:19 It's like they're watching a video that has a road.
32:24 I think it's a Japanese road, the one I saw.
32:27 So you can think that it's probably for a Japanese client.
32:33 So you mark a sign that there's a pedestrian here or a bridge.
32:40 So that kind of work.
32:42 That's what you do the whole day.
32:44 There's an example I was told about by a fellow in Brazil.
32:53 The AI being trained is for a robot vacuum.
32:59 But what they do is take photos of dog poop.
33:07 Because apparently, the AI doesn't know that it's poop, so it would just try to clean it.
33:14 So you need to input, feed data on what the poop looks like,
33:21 so it knows not to clean over it.
33:23 So there's that type of data labeling work.
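To picture what this kind of work actually produces, here is a minimal sketch of a single labeling record, the structured data a worker creates by drawing boxes around objects in one frame; the format, file names, and classes are all hypothetical.

```python
# Minimal sketch of one annotation record from a labeling task (hypothetical
# format): boxes a human worker drew become training data for a vision model.
import json

annotation = {
    "image": "frame_000412.jpg",   # one frame from a dashcam or product video
    "labels": [
        {"class": "pedestrian_sign", "bbox": [412, 88, 470, 150]},  # x1,y1,x2,y2
        {"class": "bridge",          "bbox": [0, 0, 1280, 340]},
        {"class": "dog_poop",        "bbox": [610, 700, 660, 740]},  # avoid, don't clean
    ],
    "annotator_id": "worker_1043",  # the human behind the "machine" learning
}
print(json.dumps(annotation, indent=2))
```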
33:28 So these are the types of work that, apparently, a lot of countries in the Global South,
33:37 the Philippines, Latin America, India, also do.
33:47 So our role in the supply chain is big.
33:51 If we look at the labor concerns which were brought up in the Washington Post report,
33:59 of course, the pay is small.
34:02 This is also where platform work comes in.
34:08 Because they are not regular employees, digital workers are also subject to exploitation.
34:18 As the article said, not only is the pay low, but often their pay is delayed.
34:25 And there are instances where they are not paid.
34:28 But because of the lack of work in many places in the Philippines,
34:32 many Filipinos really just endure it.
34:35 But for those listening and considering work like that,
34:39 maybe that's something to think about.
34:41 I want to ask you, because you mentioned earlier,
34:44 the questions about AI, is it going to cause the displacement of workers?
34:49 That's inevitable with new technologies, right?
34:55 But there's always a promise that it's going to create jobs.
34:58 So it's going to displace workers, but they can be retrained for other kinds of jobs
35:07 that are created by new technologies.
35:09 To what extent is that true with AI?
35:11 My research on this is still ongoing.
35:16 But so far, I'm looking at the impact on the BPO sector.
35:23 Reports are saying that 1.8 million jobs will be lost with the adoption of AI.
35:31 But so far, the picture is a bit complicated, because you will hear stories that
35:43 they use AI, but workers would then move on to other tasks.
35:47 Or jobs could be affected, but not those of the full-time or regular employees.
35:55 It's worth looking into because outsourced work is a big thing in the Philippines.
36:03 So if this is one of the things that will be affected by AI,
36:06 I think the impact in the Philippines will be big.
36:11 But so far, I can't say anything conclusive yet.
36:15 Let's say, the prediction of 1.8 million, when is that supposed to happen?
36:22 Or have we experienced it already?
36:24 But so far, what's consistent is that workers shift to other tasks,
36:32 because theirs could be done by AI.
36:36 Well, Karol, you mentioned transcribing.
36:39 So a lot of transcribing around the world is already being done by AI tools.
36:44 This particular interview will also be transcribed with an AI tool.
36:49 But we have not let go of our transcriber.
36:54 His job now is to review the transcript because the transcribing tool we use is not 100% accurate.
37:06 So we still need people to review.
37:09 As far as I know, no one has been displaced despite the fact that transcribing has become more efficient.
37:15 But it's still far from perfect.
37:18 So we still need people.
37:20 At least in our field, even with AI, the job is still there.
37:28 So far, it has just helped in our production.
37:33 I want to pivot a little bit to something that's being discussed a lot, the deepfakes.
37:38 These videos of real people saying things they never said.
37:48 Sometimes, what they appear to say is dangerous,
37:51 things they really didn't say.
37:54 So that's why it's a deepfake.
37:56 It's convincing.
37:58 Their voices are mimicked.
38:01 Sometimes, their lips even match the words.
38:04 It's crazy.
38:07 Share your thoughts about this.
38:11 What's the potential of that to cause great harm?
38:14 At first glance, some of it doesn't look convincing.
38:20 You wouldn't take it seriously.
38:23 Even if it's smooth, you still know that there's something off.
38:29 But that's me.
38:31 From the point of view of someone who's familiar with this.
38:34 I think it's a legitimate source of concern,
38:39 the material that's being released.
38:42 Especially since we've had experience with several elections already.
38:47 I think our first one would be Argentina.
38:50 And then India also.
38:54 Where these tools were used to produce content.
39:01 You have videos of a politician or a candidate running for office
39:07 That can be sent to voters.
39:10 Or there are robocalls that tell you not to vote.
39:15 So these types of content.
39:20 We should focus on where it's coming from.
39:24 Because I think it can have a big impact.
39:29 Especially when we're talking about it in the context of an election.
39:33 And of course, it will add to our whole disinformation problem.
39:40 We're not blaming people.
39:42 But if that's what's being shown to us, we might really believe this material.
39:50 So yeah, I think it's a cause for concern.
39:56 Especially in the context of elections, for instance.
40:00 Okay, in the context of the university where you teach.
40:04 I know that's a big issue in academia.
40:07 Because it's a big temptation for students.
40:11 Even professors also have to write and research.
40:15 So it's a big temptation to use AI.
40:18 What are the boundaries there?
40:20 I'm sure you do not tell your students not to use AI.
40:25 Because you're teaching them tools.
40:28 But how far should they use AI?
40:31 Right now, we're talking about the policy on AI in academia.
40:39 As I understand, the university has released something.
40:44 But it's not that specific about the production of assignments, for instance.
40:51 Personally, I'm worried about the shortcuts.
40:56 I'm not saying we should not explore AI tools.
41:01 Because I just said earlier that I'm for co-opting AI.
41:05 But in the context of the university where we study.
41:11 Especially in journalism, it's a big thing for you to read.
41:19 To synthesize the findings and everything.
41:23 This is something that I don't encourage when you write a report.
41:31 Especially when we talk about original reporting.
41:36 Because obviously, if you use AI, where will you pull it from?
41:40 From material that's already published.
41:43 So you still have to do your own interviews.
41:45 Make sense of the information that you got.
41:49 Okay, Karol, as a professor, can you tell if your students' submissions were the product of AI?
41:58 Is there a way for teachers and professors to know if it's not the original work?
42:05 That they did it in ChatGPT or another AI tool?
42:09 I think it's a little bit hard to detect.
42:12 I know there are tools that can detect it.
42:15 But as I understand, ChatGPT also keeps improving.
42:19 If you enter one query and then you enter it again later, it won't give you the same results.
42:28 So technically, if we follow that, even if I enter the same prompt, I won't be able to trace the student's submission.
42:39 So here, you will look at his portfolio, his work portfolio to check if,
42:47 "Wait, why did his output suddenly change now when I know that this is the quality of work he is submitting?"
42:56 So it's a little hard to detect.
42:59 So you need a reliable benchmark.
43:01 For example, at the beginning of the semester, this is the output.
43:05 You know that this is his work because you saw it in his classroom.
43:10 And then later on, there's an assignment that was submitted.
43:13 The style is a huge improvement or the vocabulary words used are different.
43:19 Is that so?
43:20 You really have to just trust your judgment on it.
43:25 In class discussions, what I do is ask them to report on their outputs as well.
43:34 So of course, if you ask them to elaborate, they should be able to discuss it fully.
43:45 But in my context, investigative reporting, because there's an emphasis on getting original information,
43:55 it's a little hard for them to rely on an AI tool to do that part of the report.
44:10 They would have to interview people for that or analyze data for that.
44:16 Okay, my final question, Carol.
44:19 Are you more optimistic than pessimistic about AI?
44:23 Because there are a lot of scenarios and opinions for the future.
44:27 Some are saying that even technology leaders are issuing warnings that if we're not careful, AI will take over the world.
44:34 It's going to take control of institutions.
44:37 There will be super intelligence in the near future where they're going to disregard human judgment and decision-making and make decisions on their own.
44:47 On the other hand, some are saying that AI will save humanity.
44:53 It's going to enable us to solve problems, etc.
44:58 Where do you stand on this?
45:00 Usually, the conversation is like this.
45:02 Either doomsday or techno-solutionism:
45:05 "This is the answer."
45:07 But I think if we look at history, from print to the internet, the reality is somewhere in between.
45:15 So for me, I am cautiously optimistic because I've been using the tools and I've experienced the advantages.
45:26 I hope others can also practice that so that we're able to harness AI to improve our reporting.
45:37 That's why it's called co-opting AI.
45:39 However, when it comes to alignment, that's what makes me nervous.
45:46 This is what the talk about general AI is about. We have depictions that it's capable of reasoning and all.
45:58 We will also monitor this because a lot of these things are happening behind the scenes.
46:03 That's the use of algorithms. So maybe the connection is that we also investigate and we also seek accountability in these AI systems.
46:19 So in that regard, if it's going to be used to sway elections, for instance,
46:26 we need to raise awareness, because that's definitely not how we want AI to be used.
46:33 We don't want AI to have that kind of impact on us.
46:38 Okay, so you are cautiously optimistic and the keyword there, of course, is cautious.
46:44 Good advice. So we want to thank you on that note, Carol.
46:48 Thank you for sharing and for your valuable research on this transformative technology.
46:54 We're looking forward to your book.
46:56 Professor Karol Ilagan, journalist, researcher, and faculty member at UP Diliman.
47:01 Thank you very much and long live!
47:05 Thank you, Sir Howie. Thank you very much.
47:08 Hi, I'm Howie Severino. Check out the Howie Severino Podcast.
47:13 New episodes will stream every Thursday.
47:15 Listen for free on Spotify, Apple Podcasts, Google Podcasts, and other platforms.
47:20 (music)