As technology’s importance to companies grows, so does the need for what we call “digital governance”: leveraging technology within governance structures and managing the risks it presents. In a recent paper, we examine how China and India are addressing these two tasks, focusing on the impact of hard and soft law, policy statements, and corporate practice.
By looking at China and India, we highlight a significant but often overlooked perspective on the hottest area of technology – artificial intelligence, or AI. While U.S. tech companies have in some ways been at the forefront of the race to develop AI, and while the EU has tried to control that race through regulations that, via the so-called Brussels effect, have become models for some other countries, China has used recent advances to become a surprisingly formidable competitor. At the same time, India has encouraged the use and development of AI to the extent that big U.S. tech companies have been keen to increase their investments in India’s AI industry.
In our paper, we sought to answer three questions:
- What regulatory approaches are China and India adopting to manage AI and data protection within the framework of corporate governance?
- How are companies in China and India incorporating technological expertise into their corporate governance structures, and to what extent is this information being disclosed publicly?
- To what extent do institutional and developmental differences between China and India shape their respective responses to technology-related risks and innovation within corporate governance?
As for the first and second questions, our takeaway is that there is a convergence in corporate governance practices around AI risks. In China, we have seen greater AI adoption, more laws and regulations on AI use, and more cases involving AI issues before the courts than in India. In practice, though, responses at the corporate level in both countries are not very different from what we see in the West – the creation of the position of chief technology officer, the emergence of either board committees to deal with tech risks or risk management committees that include tech risks within their ambit, and the disclosure of how companies are using technology and managing its risks. In China, these responses are driven by top-down guidance and policy incentives from central and provincial authorities. In India, they are simply the result of market demand.
In answering the third question, we first note a striking point of similarity: Both China and India have national policies that encourage companies to develop and adopt AI. China adopts a state-driven, pro-innovation regulatory model for AI governance that relies heavily on soft law tools such as guidelines, standards, and strategic plans. The government plays a central role in steering AI development through top-down coordination, policy incentives, and guidance, rather than strict legal constraints. While China has a comprehensive legal and regulatory framework covering data protection, cybersecurity, and increasingly AI-specific and algorithm-related rules, its overall approach remains flexible and adaptive, aiming to promote innovation while managing risks as they arise. This model contrasts with the EU’s more precautionary and rights-based regulatory approach.
In contrast, India has far fewer regulations and soft-law mechanisms, a difference that stems from a combination of four factors. First, India is somewhat behind in AI adoption and so is still in the process of enacting applicable laws. Second, India is attempting a more principles-based, flexible, and self-regulatory model of AI governance. Third, China, as a centralized state with strong administrative power, has a long-standing tradition of using the “invisible hand” of governance – policy documents, administrative guidelines, and strategic plans – to steer market behavior and shape industrial development without relying solely on formal legislation. Fourth, China, as a civil law country, relies heavily on written laws, administrative regulations, and formal guidelines to manage the risks associated with data protection and technology use. This legal tradition emphasizes codified rules and top-down regulation, resulting in a more structured and detailed regulatory framework. India, as a common law country, tends instead to adopt a more principles-based and case-driven approach, with greater reliance on judicial interpretation and regulatory discretion. As a result, China has developed more comprehensive and prescriptive rules in areas like AI governance and data regulation compared with India’s more flexible and evolving model.
Beyond the above questions, our paper addresses other issues, two of which we discuss here. First, ESG concerns play a role in AI regulation, policy, and practice in China and India. In China, central state-owned enterprises (SOEs) are promoting ecological collaboration and ESG practices through AI. In India, these sorts of practices have not emerged in the public or private sectors. However, it is worth noting that the policy paper on AI by NITI Aayog (akin to a planning commission) has the subtitle ‘#AIForAll’ and notes that public sector entities in India should be “leaders in adoption of social AI tools.” This focus on SOEs in China and on public sector companies in India is not surprising, considering that both are emerging economies with development imperatives. Second, our paper shows that the China and India stories must be accounted for in the broader global narrative of AI regulation.
This post comes to us from professors Akshaya Kamalnath at the Australian National University Law School and Lin Lin at the National University of Singapore. It is based on their recent paper, “Corporate Governance, Technology, and the Law – Perspectives from China and India,” available here.