A new report from the Center for Long-Term Cybersecurity (CLTC) at the University of California, Berkeley recommends that national governments use their own spending on the development and deployment of artificial intelligence (AI) technologies to shape best practices that can help govern the field as AI use becomes widespread.
“Governments have an opportunity to establish standards and best practices while promoting AI development and use, for example by implementing guidelines for government procurement of AI systems, and by adding criteria such as safety, robustness, and ethics to AI R&D funding streams,” the report says in one of five recommendations for policy makers.
“Additionally, establishing processes to support transparent and accountable government funding and use of AI technologies will help prevent misuse throughout public services and protect government actors from the limitations and vulnerabilities of AI tools,” the report says.
Those recommendations align with the Trump administration’s AI executive order issued earlier this week, which prioritizes Federal government investment in AI-driven projects and directs Federal agencies to develop AI research and development budgets that support their core missions.
Other recommendations for national governments offered in the CLTC report include:
- Pursue global coordination of AI policy early on, since cross-border coordination “will be harder to achieve the longer we wait due to technological and institutional ‘lock-in’”; and
- Hold the tech sector accountable for its role in addressing AI challenges.
“Policymakers have the unique primary responsibility to protect the public interest, and this responsibility carries even greater weight during periods of significant technological transformation. Governments should ensure their citizens have access to the benefits that emerge from AI development and are proactively protected from harms,” the report says.
Many of the report’s key findings come from an analysis of the AI policies of ten industrialized nations (the U.S., Canada, China, France, India, Japan, Singapore, South Korea, the United Arab Emirates, and the United Kingdom) and data from a larger group of 27 countries that have articulated AI development plans and policies.
That analysis shows that only half of the government strategies surveyed discuss the need for AI systems that are “robust against cyberattacks,” and that only two mention challenges associated with the rise of disinformation and manipulation online.
The report also points out that the U.S. and China, often positioned in popular debate as rivals, if not enemies, in the race for AI development, share “many priorities for advancing AI, including international collaboration; transparency and accountability; updating training and educational resources; private-public partnerships and collaboration; creating reliable AI systems; and promoting the responsible and ethical use of AI in the military.”
“It has become clear that AI is a transformative general-purpose technology that will spread across geographies and sectors, resulting in massive potential benefits—and risks—that are difficult or impossible to foresee,” the report says.
“The steps nations take now will shape AI trajectories well into the future, and those governments working to develop thoughtful strategies that incorporate global and multistakeholder coordination will have an advantage in establishing the international AI agenda and creating a more resilient future,” it says.