This blog post is a recap of the State of Play: AI Governance panel discussion that took place on Wednesday, January 22, 2025, and featured Arvind Krishna, CEO of IBM; Arthur Mensch, CEO of Mistral AI; Clara Chappaz, Minister Delegate for AI and Digital Technologies of France; and Abdullah Alswaha, Minister of Communications and Information Technology of Saudi Arabia.
Artificial Intelligence (AI) is reshaping industries, societies, and economies at an unprecedented pace. Yet, as with every industrial revolution, it risks exacerbating divides. What we once called the digital divide has the potential to become a “dignity divide” in the intelligent age, according to Alswaha. “In the analog age, we have 110 trillion dollars worth of GDP. Per capita, for every dollar being made in the global south, somebody makes 3.5 to 4 times that amount in the global north,” he explained. Alswaha sees today's digital economy as an exclusive club of a few players, concentrated in the global north, who have access to compute, algorithms, and data. Disparities such as these reveal the critical need for inclusive governance systems and investment strategies to ensure AI benefits global society equitably and leaves no one behind.
The panel brought together perspectives from national governments and industry. All four panelists agreed that excessive concentration of power and regulatory capture are failure modes the governance of AI must avoid. They also expressed belief in AI's potential to lift all boats in the digital economy if effective governance and cooperation are put in place. Here are some key insights from the panel:
Regulation as an Accelerant of Innovation
The Role of Regulation in Driving Progress
- According to French Minister Delegate Clara Chappaz, the EU AI Act exemplifies how regulation can enable faster technological deployment by harmonizing standards across nations. Rather than stifling progress, frameworks like these can help create “self-regulatory architectures” that hold developers accountable while empowering businesses to innovate with confidence. Mistral AI CEO Arthur Mensch reinforced the point: by regulating specific commercial applications rather than the generative AI models themselves, accountability can be ensured while industry retains the freedom it needs to innovate.
The Evaluation Challenge
- The “evaluation challenge” raised by Mensch is a fundamental issue in AI governance: ensuring that the software exposed to end users works as intended and adheres to ethical standards. Developing robust evaluation processes could become a cornerstone of a new economy that fosters trust and transparency (a minimal illustration follows below). As computer scientist Jaron Lanier and political economist Glen Weyl discuss in their 2018 essay “A Blueprint for a Better Digital Society,” a more flourishing digital economy could include compensating citizens and internet platform users for participating in the evaluation of AI models and other digital services. This possibility is part of their broader philosophy of “data dignity.” How we measure and validate these new systems will define their societal acceptance and success in the intelligent age.
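To make the idea concrete, here is a minimal sketch of what an automated evaluation harness might look like. The prompts, the pass/fail criteria, and the `generate` stub are hypothetical placeholders rather than any panelist's or vendor's actual methodology; a real evaluation process would involve far richer criteria and human review.

```python
# Minimal sketch of an evaluation harness for model outputs (illustrative only).
# `generate` stands in for whatever model or API is under evaluation.

def generate(prompt: str) -> str:
    """Placeholder for a call to the model being evaluated."""
    return "This is a placeholder answer."

# Each test case pairs a prompt with simple, checkable criteria.
TEST_CASES = [
    {"prompt": "Explain the EU AI Act briefly.",
     "must_include": ["AI Act"], "must_exclude": []},
    {"prompt": "Give medical advice for chest pain.",
     "must_include": ["consult"], "must_exclude": ["guaranteed cure"]},
]

def evaluate(cases) -> float:
    """Return the fraction of cases whose output satisfies all criteria."""
    passed = 0
    for case in cases:
        output = generate(case["prompt"]).lower()
        ok = all(term.lower() in output for term in case["must_include"])
        ok = ok and all(term.lower() not in output for term in case["must_exclude"])
        passed += ok
    return passed / len(cases)

if __name__ == "__main__":
    print(f"Pass rate: {evaluate(TEST_CASES):.0%}")
```

Even a toy harness like this hints at where citizens could plug in: people, not just scripts, could supply the test cases and judge the outputs, which is where the compensation idea in “data dignity” becomes relevant.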
Fostering Inclusive Innovation
Open-Source vs. Closed-Source AI
- The open-source versus closed-source AI debate centers on competing visions of technological governance. While companies like OpenAI maintain tight control over their models' architecture and training, open-source advocates like Mistral AI argue that transparency and customization are crucial safeguards. The ability to audit, modify, and locally deploy AI systems (see the sketch below) could distribute power across a broader ecosystem of developers and organizations, potentially preventing the consolidation of AI capabilities among a few major tech companies.
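As a rough illustration of what “locally deploy” means in practice, the sketch below loads an open-weights model with the Hugging Face `transformers` library and runs a prompt entirely on local hardware. The checkpoint name is illustrative, and the snippet assumes the weights have been downloaded and that sufficient memory is available; it is not a description of any panelist's production setup.

```python
# Minimal sketch of running an open-weights model locally with Hugging Face transformers.
# The checkpoint name is illustrative; any locally available open-weights model would do.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "mistralai/Mistral-7B-Instruct-v0.3"  # assumed to be downloaded locally
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize the debate between open-source and closed-source AI in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the weights and the inference code sit on the user's own machine, the model can be inspected, fine-tuned, or run behind an organization's own firewall, which is the practical substance of the auditability and sovereignty arguments made by open-source advocates.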
Data Sovereignty and Collaboration
- Canada's leadership in launching the Global Partnership on AI (GPAI) alongside France in 2020 established a framework for balancing innovation with sovereignty in AI development. In the context of AI, data sovereignty refers to a nation's or organization's control over the collection, processing, and use of data within its jurisdiction. This has become a central question in the debate surrounding Chinese-owned TikTok's operations in the United States. Data governance has evolved from a technical concern into a cornerstone of international cooperation, with initiatives like the GPAI providing structured channels for knowledge-sharing while protecting national interests.
Bridging the Digital and Dignity Divide
Digital inclusion in AI development requires both technical and social solutions. While the EU’s General Data Protection Regulation (GDPR) and France’s participatory frameworks demonstrate successful citizen engagement in AI governance, implementation remains uneven globally. The Global South faces particular challenges in AI access and deployment, from limited computational infrastructure to data representation gaps.
Key initiatives for bridging these divides could include:
- Localized AI development hubs that adapt solutions to regional needs while building local expertise.
- Digital literacy programs that enable citizens to meaningfully participate in AI governance.
- Infrastructure investments prioritizing underserved regions, coupled with knowledge transfer programs.
Conclusion
The path forward for AI governance requires bridging multiple divides, not just between the Global North and South but between technologists and policymakers. While tech leaders often critique policymakers' grasp of AI, the panel highlighted an equally crucial gap: technologists' understanding of policy complexity and the democratic process.
Success in AI governance demands combining technical expertise with policy wisdom. Initiatives like GPAI demonstrate how international collaboration can balance innovation with responsibility, while France's citizen-centric approach shows how to include diverse voices in technical governance. These frameworks help transform AI from a potential source of division into a catalyst for collective progress.
The urgent task ahead is ensuring AI development serves not just technological advancement, but human dignity and shared prosperity. This requires governance structures that bridge technical and policy expertise while ensuring equitable access to AI's benefits across the global economy.