According to a government advisory, unreliable AI models in India will need MeitY approval before deployment

AI companies must now include labels warning users about unreliable results, as requested by the MeitY advisory.

According to reports, India's Ministry of Electronics and Information Technology (MeitY) has issued an advisory stating that any artificial intelligence (AI) or generative AI model that is unreliable in any way, or still at any stage of testing, must obtain the "explicit permission of the government of India" before being deployed in the country. The advisory came days after some users reported that Google's Gemini AI chatbot was providing false and misleading information about the nation's prime minister.

The Economic Times reported that the advisory was released on March 1 and that businesses were expected to comply with it. The advisory asked companies that have already deployed AI platforms in the country to ensure that "their computer resources do not permit any bias or discrimination or threaten the integrity of the electoral process." MeitY has also reportedly asked that metadata be added to AI platforms so that AI-generated content can be identified if it is used to spread false information or create deepfakes.

Additionally, businesses were asked to include clear disclaimers if a platform behaves unreliably or produces false information. According to the report, platforms will also need to warn users not to use AI to create deepfakes or any other content that could influence elections. Although the advisory is not legally enforceable, it signals the direction AI regulation is likely to take in India.

When some users shared screenshots of Google Gemini displaying false information about Prime Minister Narendra Modi, the problem of unreliability initially surfaced. In response to comments made on X (formerly known as Twitter) on February 23, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar stated, "These are violations of several provisions of the Criminal code as well as direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act."

The advisory drew criticism from founders in India's AI ecosystem. KissanAI founder Pratik Desai wrote, "I was such a fool thinking I will work bringing GenAI to Indian Agriculture from SF," adding that his team had been thrilled to be developing a multimodal, low-cost pest and disease model, and that after working four years straight to bring artificial intelligence to this field in India, the move was awful and demoralizing.

In response to the criticism, Chandrasekhar clarified in a series of posts that the advisory was issued in light of the country's existing legal framework, which forbids platforms from producing or facilitating illegal content. "[...] platforms are subject to explicit legal requirements under both criminal and IT law. Therefore, the best way to protect yourself is to use labeling and explicit consent. If you run a large platform, you should also get government approval before deploying error-prone platforms," he continued.

The Union Minister went on to say that only "large platforms" will need to seek permission from MeitY and that the advisory is aimed at "significant platforms"; startups are not covered by it. He added that it is in companies' best interests to heed the advisory, as it acts as a form of insurance against users who might otherwise sue the platform. "Govt, users, and platforms all share the common goal of ensuring safety and trust on India's internet," he stated.