New advisory for intermediaries rolling out #artificialintelligence products in India demands explicit government approval. Will it be the death knell for #AI in #India?
Payaswini Upadhyay in conversation with Trilegal's Rahul Matthan.
Read: https://bit.ly/3TkR2zn
Transcript
00:00 [MUSIC PLAYING]
00:03 Hi there.
00:12 You're with NDTV Profit, and this is The Fine Print.
00:16 The government of India has a new advisory
00:18 for intermediaries rolling out artificial intelligence
00:22 products in the Indian market.
00:24 The advisory has been issued under the Information Technology
00:28 Rules.
00:28 It says that platforms must ensure
00:31 that users don't upload or share unlawful content while using AI
00:36 products.
00:37 Intermediaries or platforms have to ensure
00:40 that their products, including AI models,
00:42 don't produce biased results that
00:45 may threaten the integrity of India's electoral process.
00:49 Explicit government approval will
00:51 be needed by platforms to use under-tested or unreliable AI
00:56 models, and such under-tested models
00:59 will need to carry a disclaimer that the output may
01:02 be unreliable.
01:03 Now, even as the stakeholders in the market and the media
01:06 were digesting this advisory, Rajeev Chandrasekhar
01:10 has just tweeted out saying that the advisory is
01:13 aimed at significant platforms and that permission-seeking
01:16 from the ministry is only for large platforms
01:19 and will not apply to startups.
01:22 He's also explained, saying that the advisory is
01:25 aimed at stopping untested AI platforms from deploying on the Indian
01:29 internet, and that the process of seeking permission, labeling,
01:32 and consent-based disclosure to users about untested platforms
01:37 is an insurance policy for platforms, who can otherwise
01:40 be sued by consumers.
01:42 Now, what does this all mean for the internet, the innovation
01:48 in the AI space?
01:49 Where is all this coming from?
01:51 Can the government be faulted for wanting AI models
01:54 to be more responsible?
01:56 And is this the best way to go about it?
01:58 To answer all this and more, I'm joined by Rahul Matthan,
02:01 partner at Trilegal.
02:03 Rahul, welcome to The Fine Print.
02:05 First, let's set the sort of background
02:08 or explain the background to all of this.
02:11 Can you help us understand what's
02:13 happened in the recent past?
02:15 I'm sure most of us who've been plugged
02:17 into the news in this space know where all this is emanating
02:22 from, but just for the viewers who don't know,
02:25 can you give us the context of where this advisory,
02:27 or the apprehension behind it, is coming from?
02:30 Oh, I don't know for sure exactly
02:33 where this is coming from, because as you
02:35 read through the Twitter space, you
02:38 hear all sorts of comments as to what exactly
02:42 is the reason for this.
02:44 I can hazard a guess.
02:45 And I think when Google launched Gemini 1.5,
02:52 it was supposed to be this great large context window model.
02:56 But after a couple of days, it started spewing out
03:01 things that were perhaps inaccurate.
03:04 The images were very, very diverse,
03:08 perhaps to counterbalance some of the diversity challenges
03:12 that the earlier models had.
03:15 And right now, it was being quite silly
03:18 in the extent to which it was incorporating diversity
03:20 in the images.
03:21 And then some of the questions that
03:23 were asked to the model yielded some answers that were perhaps
03:29 not the sorts of answers that one
03:32 would want to hear from a model.
03:33 And perhaps it is all of this that
03:36 has provoked this sort of reaction from the government.
03:41 There are other people who have been talking also
03:45 on the Twitterverse or on X about some local models that
03:51 have been built by Indian startups
03:53 and some concerns around the fact that those models may not
03:56 be fully tested.
03:58 Clearly, that, I think, is sort of the reason for this.
04:02 And I'm sure in the course of this conversation,
04:04 we're going to get into whether this is the right thing or not.
04:07 So I'll stop here.
04:08 But just to say that artificial intelligence as it currently
04:12 stands today, which is large language models,
04:14 is a fundamentally probabilistic technology.
04:18 What they will do is they will scan
04:21 through a vast context of information
04:25 and try and find answers that are
04:28 appropriate in the context.
04:30 But when they try and find the answers,
04:32 this is not some oracular technology that
04:35 is finding perfect solutions.
04:37 These are probabilistic solutions.
04:39 These are technologies that are trying
04:42 to find the statement which has the highest
04:47 probability of aligning with what the request is.
04:50 And in that, it is fundamentally going
04:53 to be flawed regardless of how good people make them.
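To make the point about probabilistic technologies concrete, here is a minimal, self-contained Python sketch of the sampling step at the heart of a large language model: raw scores (logits) over candidate next tokens are converted into a probability distribution and then sampled. The tokens and numbers below are invented purely for illustration and are not drawn from any real model.

    import math
    import random

    # Hypothetical logits a model might assign to candidate next tokens
    # for the prompt "The capital of France is" (illustrative values only).
    logits = {"Paris": 9.1, "Lyon": 4.2, "Nice": 3.0, "Berlin": 1.5}

    def softmax(scores):
        """Turn raw scores into a probability distribution."""
        m = max(scores.values())  # subtract the max for numerical stability
        exps = {tok: math.exp(s - m) for tok, s in scores.items()}
        total = sum(exps.values())
        return {tok: e / total for tok, e in exps.items()}

    probs = softmax(logits)

    # The model does not look up a fact; it samples from the distribution.
    # Most draws return "Paris", but lower-probability tokens can still
    # surface, which is why even a well-tuned model is sometimes wrong.
    token = random.choices(list(probs), weights=list(probs.values()))[0]
    print(probs, "->", token)

The sampling step is inherently random: tuning can shift the probabilities, but it cannot guarantee a particular answer, which is the sense in which these models remain fundamentally flawed no matter how good they get.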
04:59 Yes, Rahul, you've always batted for innovation.
05:01 But innovation can't sort of happen
05:05 entirely without any oversight, without any regulation.
05:09 So let me come to you with this question.
05:11 We're staring at the national election.
05:12 The timing of all of this is, let's say,
05:16 very crucial for the electoral process that's
05:21 about to happen in our country.
05:23 In that context, is the government
05:25 not sort of in the right to say that, look,
05:29 you're rolling out these models,
05:31 you're testing them on the public, probably not labeling them
05:35 to the extent that you should?
05:37 Because while people might tinker and experiment
05:40 with this technology, the responses that it throws up
05:43 have the power to influence ordinary users and consumers.
05:48 With that background, what do you
05:51 think the government's stance should be?
05:53 Should they let this innovation run wild?
05:56 Or are there ways, or better ways, to make
06:01 the platforms, the AI models, the users more responsible?
06:07 Yeah, look, I think--
06:10 technology is going to keep changing.
06:13 There's always going to be new risks
06:15 that technology brings up.
06:18 And we're always going to have elections.
06:19 Every five years or every two and a half years,
06:21 we're going to have some sort of election.
06:23 There's going to be some sort of reason why we should not
06:27 want the latest version of technology to be unleashed.
06:31 But yes, I'll say this, Payaswini.
06:33 We've always had people who have used new technologies
06:38 to find new ways to subvert information.
06:41 I would say ever since Julius Caesar's time.
06:44 In Julius Caesar's time, they had a newspaper.
06:46 And there was fake news in the newspaper.
06:48 And then well before television, there
06:51 was fake news in print journalism.
06:54 And then there was fake news in radio.
06:56 And then there was fake news in visual media.
06:59 Now we've got really believable fake news--
07:01 [INTERPOSING VOICES]
07:04 Sorry?
07:05 The extent, the reach of it, is what's sort of bothersome,
07:09 which requires quicker correcting than not.
07:13 Every Payaswini in every decade or so
07:17 has asked every Rahul Matthan the same question
07:20 and said that, look, this new technology has greater reach.
07:23 I will give you this in writing that the next technology that
07:26 comes will have even greater reach than this technology.
07:29 So the point I'm trying to make is not this.
07:31 There is always a pressure that technology
07:35 will bring to bear on the existing way of doing things.
07:41 And we can certainly call for a pause during this election
07:45 season because this is a general election.
07:47 And maybe we think that this is super important.
07:49 All I would say is that technology
07:51 is galloping at a frenetic pace.
07:53 And if we stop innovation for even three months
07:57 that it's going to take for us to get past this election,
08:00 we will, I'm afraid, be just completely out of the race
08:03 when we resume.
08:04 So that's not the solution in my mind.
08:07 I think that no matter what we do,
08:09 there will be people who will find ways of using technology
08:12 that perhaps don't fall squarely within the rules.
08:14 So just remember that this is aimed at intermediaries.
08:18 As the honorable minister says, it's
08:20 aimed at the large platforms.
08:22 Now, that's not going to stop small platforms from doing this.
08:25 This is now very easily accessible
08:28 general purpose technology.
08:29 So we may be chopping the head off
08:32 and finding that it's the tail that carries the sting.
08:34 This is not the way to deal with this.
08:36 I think with every revolutionary technology,
08:40 there is going to be a time when society
08:43 will struggle to come to terms with these new changes.
08:47 The example that I often give is photographs.
08:49 When photography was first invented,
08:51 we believed photographs to be the truth
08:54 because this was a mechanical representation of what
08:57 the eye could see that could not be tampered with.
08:59 And then, of course, Photoshop came
09:00 and we don't believe any photograph
09:02 because today it's possible to tamper
09:04 with absolutely any photograph.
09:05 So that's no longer evidence of the truth.
09:08 And what's happening right now with digital video
09:11 and with digital audio is that exactly the same Photoshop
09:15 moment is happening.
09:16 And so we are going to have to struggle as a society
09:20 to come to terms with this.
09:21 There's going to be no shortcut.
09:23 I'm quite sure that the government is well-intentioned
09:25 in putting this out.
09:27 I'm afraid that the government hasn't
09:29 considered the consequences because this
09:32 is an important technology for our country.
09:34 AI has tremendous use cases for development
09:37 and we've got to allow innovation to proceed apace.
09:41 I know there's going to be difficulties.
09:43 We've got to find new ways to deal with it.
09:45 OK, has anybody found an answer or a balance,
09:49 let's say in other jurisdictions,
09:53 to deal with it in a fashion that allows innovation
09:56 to thrive, yet without the ramifications
09:59 like we saw in Gemini's case?
10:03 Is there a right answer or a balance to your mind?
10:06 Look, I think what happened with Gemini
10:08 is actually a good lesson that we could apply.
10:12 The moment Gemini came out and the results
10:16 were not so OK, there was, of course,
10:21 social media and various people posting
10:23 that these were the problems.
10:24 But what was the result of that?
10:26 Google paused the use of Gemini for generating images.
10:30 And that's sort of the reaction we need.
10:33 If these technologies in the initial launch
10:38 don't perform as society wants them to perform,
10:41 leave aside what the technology companies
10:44 want them to perform,
10:46 we need to be able to very quickly pull back
10:50 those models, make the changes, and release
10:53 better updated, more advanced models.
10:55 And in that iterative process, we will get it right.
10:58 So I think the point really is not
11:00 to try and require companies to take preapprovals,
11:04 because we don't even know what they're preapproving for.
11:07 These are such probabilistic models,
11:08 and we're tending away from narrow models
11:12 to general purpose models.
11:14 The goal is general intelligence.
11:16 And in general intelligence, there
11:18 is no way to fully test all aspects of it,
11:21 because if there was, it wouldn't
11:22 be general intelligence.
11:23 You would be testing within a very narrow parameter
11:26 of things.
11:27 So this preapproval for AI is a doomed approach, in my view.
11:32 I think the best thing we can do is constantly and rigorously
11:35 test the models before they are put out into the wild.
11:40 And perhaps the government can insist that be done.
11:43 And then recognize that even after this testing,
11:47 that when the model comes out into the wild,
11:49 it will still have some faults.
11:51 And so have mechanisms so that we can, in a very rapid way,
11:55 respond to these changes so that we can pull them back
11:58 if required.
11:59 Coming back to what the solution is,
12:01 if I can sort of simplify what you're saying,
12:04 your sort of opinion is that we should course correct.
12:09 Have a quick fix, let's say, if the output from these AI models
12:15 is biased or harmful or unlawful,
12:19 if I can sort of use these broad terms,
12:22 rather than have, again, a license regime before you roll
12:27 out any of these models.
12:28 Is that what you're saying, that don't have prior permissions,
12:31 but have a sort of architecture or best practice
12:37 to quickly fix or take down once the company notices
12:42 an unlawful result?
12:45 Look, I mean, fix, not take down.
12:46 I think, look, if there's a problem with a model,
12:48 these models are too large to fix.
12:51 But they certainly are--
12:52 it is possible for us to change the way in which they've
12:55 been tuned, such that the output that they give
12:57 is more aligned to what we need.
13:00 So I would really say--
13:03 I wrote about this in a piece last week.
13:06 Essentially, the solution has got
13:07 to be having the workforce in the government
13:11 constantly monitor these things that are coming out
13:14 and, when required, reach back to the companies to say,
13:18 look, everything is fine.
13:19 It's just in this area, it's giving
13:21 results that are not OK.
13:22 Can you tweak it such that it comes back online?
13:27 And I think that's sort of the way to do it.
13:29 I don't think we in the country--
13:30 Now, there's a danger of then the government dictating
13:34 outcomes and sort of answers that are to its liking,
13:39 because then there'll be another set of protests or, rather,
13:42 voices that say, how is government
13:45 regulating free speech?
13:48 Well, you have the same problem right now,
13:49 because the best way to regulate free speech
13:51 is to just muzzle the speech completely.
13:54 So if AI models are going to need preapproval,
13:57 what's the basis on which they get preapproval?
13:59 The government has got to now evaluate all of these things.
14:01 No government officer is going to last another day
14:06 in his post if, after he has given approval
14:09 to a model, that model ends up
14:11 spouting all sorts of gibberish.
14:13 So he's not going to give an approval.
14:15 This is really not a solution to the free speech problem.
14:17 OK, so let me come to this.
14:22 I'm sure after this advisory, you'll have your clients come
14:24 to you and ask you, Rahul, tell us, how do we go about this?
14:28 What do we do as next steps?
14:30 What is your advice to them now, given
14:32 that the minister has tweeted out
14:33 saying that it's for significant platforms
14:35 and permissions will not be required by startups?
14:39 And that you have to sort of label untested AI models very
14:44 clearly.
14:45 What is the advice to the companies
14:46 that you're going to give at this stage?
14:48 What is it that they are going to the government with?
14:51 Gosh, it's really difficult. Because as a lawyer,
14:54 I'm trained to really advise clients
14:57 on the basis of communications from the government.
15:00 There was a time when we only needed
15:01 to advise on laws and regulations
15:03 and formal subsidiary legislation.
15:06 But we have expanded that scope to advising them
15:09 on the basis of advisories as well,
15:12 just because that's the nature of technology.
15:15 But I really don't know if I can extend that
15:17 to the point of advising them on the basis of Twitter or X
15:21 posts.
15:22 Because first of all, X itself is an intermediary.
15:26 Who knows whether it's going to be allowed
15:28 to retain this beyond a point, and whether this
15:31 becomes sort of evidence of the government's intentions.
15:38 I would really wait for the advisory to be amended
15:40 and for that clarity to be written into the advisory.
15:44 Right now, the advisory speaks very broadly
15:47 in terms of intermediaries.
15:48 And that's what we'll have to rely on.
15:53 So OK, can I also ask you about the legal basis of this advisory?
15:58 Is it sound on that front?
15:59 Is that something that you think can be questioned?
16:02 Because it's issued under the IT Rules,
16:05 the Intermediary Guidelines
16:11 and Digital Media Ethics Code, is there a reason for us
16:15 to question the legality of this advisory at this point?
16:20 No, I mean, this has been issued formally
16:22 by the Ministry of Information Technology.
16:24 I can see no reason why this is not something
16:28 that we need to take as something that has officially
16:31 come from the government.
16:33 It, in a logical sense, flows from the Intermediary
16:38 Guidelines, 2021.
16:41 And certainly, I think that if this were to be enforced,
16:45 I don't think that a court is going
16:47 to stand on form over substance and say that, look,
16:52 this is not--
16:53 because it was not formally issued as a regulation,
16:56 you are not to take into account this advice.
17:01 And I don't think any client of mine
17:03 is going to have the courage to say,
17:05 I'm waiting for formal rules that
17:07 are laid before Parliament before I comply with it.
17:09 That's just not the way in which technology regulation works
17:12 today.
17:12 So I would just say that we need to give due legal credence
17:18 to what has been issued by the ministry.
17:21 I would wish that these things were
17:23 issued in a more formal sense, as in through regulations.
17:27 But that's also the nature of technology regulation today.
17:30 Technology moves so quickly that we're
17:33 going to have to rely on some of these things.
17:37 But I wouldn't go so far as to rely on a post
17:40 on a social media site.
17:41 I don't think anyone would take comfort
17:45 from something like that.
17:46 OK.
17:47 So in the coming days, what is the clarity, Rahul,
17:49 that you're looking at, given that I'm
17:51 sure a lot of these significant platforms are your clients
17:54 and will come to you for advice?
17:57 What are the unanswered questions
17:58 to your mind at this stage that you'd
18:01 like the ministry to sort of clarify in its advisory?
18:05 I think if it's aimed at large social media intermediaries,
18:10 it should be clarified.
18:11 But quite frankly, I think that any form of pre-approval
18:15 would be the death knell of AI in this country.
18:19 So I think if we're saying that we're
18:21 going to aim this only at the large social media
18:24 intermediaries, then, look, a lot of the innovation
18:27 is happening there.
18:28 They are the ones that have the capabilities
18:31 to invest large amounts of money in building these big models.
18:35 And those models, of course, they
18:37 can't be doing horrible things.
18:39 But the things that they do that are useful to us,
18:42 we can't have that denied us just
18:46 because at one end of the spectrum, it's behaving badly.
18:50 There are a lot of AI startups in the country
18:53 that are really depending on some of these models
18:56 to do a lot of things.
18:57 And if out of fear, those things are taken away
19:01 from the Indian market, then that would be--
19:03 I take your objection in principle, but I'm still
19:06 keen to know what is it at this point
19:09 that these significant platforms will be doing?
19:11 Are they going to engage with the government at this stage?
19:13 Are they going to ask for more clarity
19:15 on what is the specific nature of permission that they require?
19:19 Give me the roadmap of the next few days or a week
19:22 as you advise your clients on this advisory.
19:29 Yeah, I think all of the above.
19:31 I think certainly for existing models,
19:33 they're going to need clarity as to what they should be doing,
19:36 because a lot of models are already out and available
19:39 in India.
19:41 Many of them are available through these large social
19:43 media intermediaries, and the question is, what do they do?
19:47 I think there would also be some effort
19:51 to rationalize this advisory in the context of all
19:54 the feedback that has come over the last few days.
19:57 And hopefully, there would be an engagement with the government
19:59 as far as that is concerned.
20:01 But yeah, I think if there's no further clarity on this,
20:04 or if at least to some extent it's not unwound,
20:07 we may even see the more cautious social media
20:10 intermediaries start to pull out their models from India.
20:14 Because until they get the permission,
20:17 every day that they continue to operate in India
20:19 could be a risk for them.
20:23 Yeah, and then there are the perils of India
20:25 as a country not being represented in the data
20:28 that these AI models are gathering.
20:32 Oh, that's a whole different problem, Payaswini.
20:35 Of course, we have to deal with it.
20:36 But we've got to first allow AI to work in the country
20:39 before we can train it on the context for India.
20:42 So that really is a [INAUDIBLE]
20:43 I'm just saying that if they stop including Indian data
20:49 and making it available to the larger Indian consumer,
20:53 that has its own set of challenges
20:56 in terms of how included we are in the context of how
21:01 these AI models are developing.
21:03 Rahul, many thanks for giving us your first take
21:06 on this advisory.
21:07 And I'm sure, in the days to come,
21:10 the engagement that hopefully will
21:12 happen between the government and these large significant
21:15 platforms will give us better clarity on what
21:18 it is that the government is expecting the AI models to do.
21:23 Thank you for watching us here on NDTV Profit.
21:26 Have a good day.
21:28 [MUSIC PLAYING]