
Ethical and legal perspective and challenges around AI


Darya Korolenko
Jun. 05, 2022 | 7 minutes to read

In 2021, several intergovernmental initiatives took important steps toward governing the impact of artificial intelligence (AI) on individuals and society.

The Council of Europe (CoE) is moving toward a convention, the European Commission has proposed a comprehensive regulatory package, and UNESCO has adopted its recommendations on the ethics of AI. Much important work was also done by other intergovernmental organizations, such as the OECD and the World Bank, and by initiatives such as GPAI, to name only a few.

The Ad Hoc Committee on AI at the CoE (CAHAI) recently adopted the potential elements of a legal framework for the development, design, and application of artificial intelligence, based on standards of human rights, democracy, and the rule of law. This builds on a Feasibility study, adopted by CAHAI at the end of 2020, which established the need for a legal framework consisting of a combination of binding and non-binding, horizontal and vertical, sectoral instruments; on multi-stakeholder consultations; and on subsequent intensive work by expert- and diplomat-based working groups.

In particular, the final deliverable elaborated the possible elements of a legally binding transversal instrument, expected to focus on preventing and mitigating risks emanating from applications of AI systems with the potential to interfere with the enjoyment of human rights, the functioning of democracy, and the observance of the rule of law, all while promoting socially beneficial AI applications. It should be underpinned by a risk-based approach, and its basic principles should apply to all AI systems. The document on potential elements remains restricted while awaiting approval by the Committee of Ministers, the CoE's main decision-making body (expected in February 2022), and negotiations on the expected legal instrument are due to start by May 2022.

Many past CoE conventions have already proven effective in regulating other types of technology and their impact on human rights (such as the Budapest Convention on Cybercrime, the Oviedo Convention on Human Rights and Biomedicine, or Convention 108 on Automatic Processing of Personal Data, especially in combination with its additional protocols and its "granddaughter", the European Union's GDPR). They can also be acceded to by non-member countries, which is especially relevant for technologies with trans-border effects, and they thus tend to become global standards.

Do we need it?

Why do we even need regulation, why do we need to go beyond ethics and recommendations, and what are some of the priorities?

Regulation does not inhibit innovation. If anything, it is easier to argue that the current lack of regulation is inhibiting innovation by entrenching existing "monopolies," while simultaneously encouraging the optimization of authoritarian and inegalitarian tendencies. In fact, once a field has rules, regulation helps it move faster.

Over a century ago, electric and gas-powered vehicles competed for dominance. It was not regulation that stifled that innovation: the market chose gas for business reasons, partly linked to the monopolies of that time. The consequences are now coming back full circle. Climate change and civil society have pushed governments to impose rules, and the market is readapting by moving toward electric vehicles.

We cannot afford another century to arrive at similar conclusions with AI. We need smart regulation with coherent, comprehensive, and systemic solutions to avoid unintended, and sometimes intended, consequences.

It might be true that technology itself is neutral. Its impact, however, never is. Whether a technology is ethical, compliant with society's existing and expected rules, or disruptive does not depend on the technology itself. It depends on its position and uses within the ecosystem, and above all on its designers, developers, deployers, and implementers: how they are motivated, how they are supervised, and, ultimately, how they are sanctioned if need be. That requires certainty and enforceability. And that means regulation.

We must focus on realistic problems, not fiction. Discussions about various hypothetical risks of AI can often function as a distraction. Most challenges remain on a very human level and are still not adequately solved. 

We should not be techno-solutionist and allow yet more pseudoscience to creep into governance. Nor should we be naïve in expecting too much of AI's capabilities, create new, scaled-up inequalities by failing to ask the right questions, use AI as an alibi or a deflection from personal responsibility in decision making, or accept it as a fait accompli that cannot be avoided or changed.

We need transparency, especially on what is used, where, how, and for what purpose.

We do not need explainability for all types of applications. But we do need better disclosure and understanding of the capabilities and limits of particular uses.

We do need auditability and accountability, especially if we are sincere in our desire to increase the adoption of solutions and their quality.

We need to prevent and mitigate risks and avoid strengthening some of the existing trends. Effective compliance mechanisms and standards must be ensured through independent and impartial supervisory authorities. Adequate risk classification and impact assessment mechanisms are necessary and must be consistently, systematically, and regularly applied throughout the lifecycle of applications. Such mechanisms also need to be proportionate to the nature of the risk an application poses, and carefully balanced against the abilities of developers and the expectations of society. Excessive compliance burdens can, for example, give an advantage to larger, established actors, or stimulate avoidance and further obfuscation.

And, finally, regulators should stop emphasizing their desire to be first. We should instead strive to create rules that are clear, effective, robust, and, most importantly, enabling, both for designers and developers and for our fundamental rights and values.

