The Organisation for Economic Co-operation and Development (OECD) released the first intergovernmental standard on artificial intelligence (AI) on May 22, with its 36 member countries, including the United States, signing off on its principles.

OECD, an international organization that works to establish shared solutions to social, economic, and environmental challenges, published its “Recommendation on Artificial Intelligence” to “foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values,” OECD wrote.

The standard contains five pillars backing its goal of promoting responsible AI, which call for prioritizing:

  • Inclusive growth, sustainable development, and wellbeing;
  • Human-centered values and fairness;
  • Transparency and “explainability”;
  • Robustness, security, and safety; and
  • Accountability.

OECD also encouraged five practices for policy-makers to adopt to realize these principles:

  • Invest in AI research and development (R&D);
  • Foster a digital ecosystem for AI;
  • Shape an enabling policy environment for AI;
  • Build human capacity and prepare for labor market transformation; and
  • Cooperate internationally to build trustworthy AI.

To track progress toward these goals, OECD said that participating countries will develop standard metrics to measure AI R&D and deployment. They will also build an evidence base to assess progress in AI implementation.

These rules were established with the assistance of a group of over 50 experts from different fields and sectors (government, industry, trade unions, academia, and others) after OECD’s Committee on Digital Economy Policy (CDEP) agreed to form the group in May 2018. Across four meetings held between September 2018 and February 2019, the group advised OECD on AI best practices.

CDEP will continue to build on the intergovernmental AI standard and will monitor its implementation progress.