42 countries adhere to the OECD Principles on Artificial Intelligence
The OECD is turning its attention to Artificial Intelligence. On 22 May, the member and partner countries of the OECD (Organisation for Economic Co-operation and Development) officially adopted the first set of intergovernmental principles on artificial intelligence (AI). These states have agreed on international standards intended to ensure the design of robust, safe, fair and trustworthy AI systems.
A breakthrough in terms of ethics and Artificial Intelligence? Although they are not legally binding, as the institution itself states in its press release, the fact remains that 42 countries acceded to the OECD Principles on Artificial Intelligence on 22 May. Argentina, Brazil, Colombia, Costa Rica, Peru and Romania joined the 36 regular OECD member countries (including the United States, Germany and France, for example) in adopting this "charter" at the Annual Meeting of the Council at Ministerial Level, which this year focused on "Digital Transition for Sustainable Development".
The responsible deployment of reliable AI
With the support of the European Commission, more than 50 experts from various professional backgrounds - administrations, academia, business, civil society, international bodies, the technical community and trade unions - worked to develop these principles, whose stated objective is the responsible deployment of reliable AI that best serves the public interest.
To this end, the organisation issued five recommendations applicable to public policies and international cooperation:
- AI should serve the interests of individuals and the planet by promoting inclusive growth, sustainable development and well-being.
- AI systems should be designed to respect the rule of law, human rights, democratic values and diversity, and should be accompanied by appropriate safeguards - allowing for human intervention when necessary, for example - in order to achieve a just and equitable society.
- Transparency and responsible disclosure of information related to AI systems should be ensured, so that individuals know when they interact with such systems and can challenge their results.
- AI systems should be robust, safe and secure throughout their life cycle; potential related risks should be assessed and managed on an ongoing basis.
- Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning, in accordance with the above principles.
"Artificial intelligence is revolutionising our lifestyles and work patterns and delivering significant benefits to our societies and economies, said Angel Gurría, Secretary-General of the OECD. It is therefore incumbent on governments to ensure that AI systems are designed to respect our values and laws, in order to ensure the safety and privacy of individuals."
However, the OECD does not intend to stop there and is already planning a second step: the development of practical guidelines for these principles by its digital policy experts. The organisation adds in its press release: "The OECD Principles in other policy areas have paved the way for the development of international standards and helped governments to design their national legislation. For example, the OECD Privacy Guidelines, which set limits on the collection and use of personal data, underpin many privacy frameworks and laws in the United States, Europe and Asia."
What about AI in China and Russia?
These principles are far from being applied yet, but they could eventually impose new rules on AI professionals, and they come with their share of open questions: do they hinder research on AI? Do they act as safeguards? And what about China and Russia, which are not party to such an alliance on AI?
Download the full version of the OECD Principles on Artificial Intelligence