CHENNAI: Efforts to promote indigenous AI face governance and accountability challenges.

Even as the country rapidly scales up investments in domestic AI capabilities, India's endeavor to build its own sovereign Artificial Intelligence systems is set to face complex governance, legal, and accountability challenges. This observation was highlighted in a policy white paper released by the Office of the Principal Scientific Adviser to the Government of India. Titled 'Advancing Indigenous Foundation Models,' the document emphasizes that while India has begun fostering an ecosystem for domestically developed foundation models, the rapid expansion of Generative AI has raised difficult questions regarding regulation, copyright, model accountability, and the governance of synthetic content.

Foundation models, large AI systems trained on massive datasets that power applications ranging from language processing to healthcare diagnostics, are rapidly becoming the backbone of digital economies. However, the paper warns that decisions made during the design and training phases of these models can impact thousands of downstream applications across various sectors.
"Choices made during the model design and training phase can shape performance and risks across numerous downstream uses," states the paper, underscoring the far-reaching impact these systems have on diverse industries.

A key concern raised in the report is the lack of clarity regarding accountability within the AI value chain. In many instances, models are developed by one organization and deployed by another, making it difficult to assign responsibility when AI systems generate biased, erroneous, or harmful outcomes. The report argues that accountability cannot be confined solely to application developers, as upstream design decisions made by model creators often influence the behavior of the deployed systems. Experts suggest that this emerging gap in governance represents one of the most complex policy challenges currently unfolding globally within the realm of AI regulation.

"Foundation models are no longer merely research artifacts; they are evolving into critical digital infrastructure. This implies that responsibility cannot rest solely with the final developer in the chain. Governance frameworks must acknowledge the multi-layered nature of AI development," an AI researcher at IIT Madras, who has worked extensively on large-scale machine learning systems, told DT Next.
The white paper also sheds light on unresolved copyright issues surrounding the data used to train generative AI models. Since these systems are trained on vast repositories of publicly available content, concerns have mounted regarding whether such practices infringe upon intellectual property rights. To address this issue, the government is considering a hybrid framework that would permit AI developers to train models on legally acquired data, while simultaneously mandating the payment of royalties when AI tools are deployed commercially.

The regulation of synthetically generated content, including AI-created images, audio, and video, has been identified as another challenge. According to the proposed regulations, such content must bear clear labels and embedded identifiers to prevent misuse and enhance transparency across digital platforms. Industry experts assert that while such measures are essential, they must be implemented with caution to ensure that innovation is not stifled. R. Vishnu Gopalan, a Bengaluru-based artificial intelligence expert, stated, "With the expansion of generative AI, regulation is inevitable; however, the challenge lies in crafting frameworks that safeguard users without slowing down technological advancement."

The report also calls for India-specific benchmarking standards to evaluate the performance, fairness, and reliability of AI systems across India's diverse languages and social contexts, while cautioning that global benchmarks often fail to capture the realities of India's linguistic diversity.
