In 2021, several intergovernmental initiatives took important steps toward addressing artificial intelligence's (AI) impact on individuals and society.
The Council of Europe (CoE) is moving toward a convention, the European Commission proposed a comprehensive regulatory package, and UNESCO adopted its Recommendation on the Ethics of Artificial Intelligence. Much important work was also done by other intergovernmental organizations, such as the OECD and the World Bank, and by initiatives such as GPAI, to name only a few.
The Council of Europe’s Approach to AI Regulation
The Ad Hoc Committee on Artificial Intelligence (CAHAI) at the CoE recently adopted the possible elements of a legal framework for the development, design, and application of artificial intelligence, based on standards of human rights, democracy, and the rule of law. This built on a Feasibility Study, adopted by CAHAI at the end of 2020, which established the need for a legal framework combining binding and non-binding instruments, both horizontal and vertical (sectoral). The work was informed by multi-stakeholder consultations and by subsequent intensive efforts of working groups composed of experts and diplomats.
Several past CoE conventions have already proven effective in regulating other types of technology and their impact on human rights (such as the Budapest Convention on Cybercrime, the Oviedo Convention on Human Rights and Biomedicine, or Convention 108 on Automatic Processing of Personal Data, especially in combination with its additional protocols and its "granddaughter," the European Union's GDPR). Such conventions can also be acceded to by non-member countries, which is especially relevant for technologies with trans-border effects, and they thus tend to become global standards.
The Need for AI Regulation
Why do we even need regulation, why do we need to go beyond ethics and recommendations, and what are some of the priorities?
Regulation does not inhibit innovation. If anything, it is easier to argue that the lack of regulation is currently inhibiting innovation, by entrenching existing "monopolies" while stimulating the optimization of authoritarian and inegalitarian tendencies. In fact, a field can move faster once it has clear rules.
Over a century ago, electric and gas-powered vehicles competed for the market. It was not regulation that stifled that innovation: the market chose gas for business reasons, partly linked to the monopolies of that era. That choice brought consequences, and it is now coming back full circle. Climate change and civil society have pressed governments to impose rules, and the market is readapting by moving toward electric vehicles.
We cannot afford another century to arrive at similar conclusions with AI. We need smart regulation offering coherent, comprehensive, and systemic solutions that avoid unintended, and sometimes intended, consequences.
Conclusion
The impact of AI is not neutral, and its risks cannot be ignored. Effective regulation is necessary to ensure that AI aligns with human rights, democracy, and the rule of law. Transparency, accountability, and risk mitigation must be prioritized. Regulation should not hinder innovation but rather create clear, enabling rules that balance the needs of developers with societal expectations. Instead of racing to be the first, regulators should focus on building robust and effective frameworks that protect fundamental rights while fostering technological progress.