Artificial intelligence (AI) governance in government has moved “from concept to practice with remarkable speed,” according to a new practitioner’s playbook, which says public agencies are rapidly adopting rules and structures to harness AI’s benefits while managing a complex and evolving set of risks.

The playbook is from Eric Hysen, who was the first chief AI officer at the Department of Homeland Security (DHS) from 2021 to 2025. Hysen is now an executive fellow in applied technology policy at the University of California, Berkeley's Goldman School of Public Policy and School of Information, as well as senior vice president and chief AI and transformation officer for legal and corporate affairs at Salesforce.

In his Best Practices in Public-Sector AI Governance: A Practitioner’s Playbook, published earlier this month, Hysen writes that governments across the United States are “rapidly adopting artificial intelligence (AI) to improve services – from easing highway congestion and answering tax questions to supporting clinicians and detecting fraud – while facing an evolving and complex risk landscape.”

The findings are based on a comparative review of 66 U.S. public-sector AI governance policies; interviews with 10 federal, state, local, and international leaders; and Hysen’s own experience overseeing AI policy at DHS. At DHS, he led the development of the department’s AI Roadmap, launched generative AI pilots, and established governance practices across more than 160 AI use cases.

Hysen found that governance capacity is expanding quickly but remains uneven.

According to the report, 43 states have established some level of AI governance capability, although “there remains wide variation in scope, structure, and transparency.” Many programs are still lightly staffed, and definitions and oversight models differ significantly from one jurisdiction to another.

Governments are embracing AI to transform operations and public service delivery. Examples cited in the report include reducing highway congestion, improving customer service for tax questions in California, enhancing clinician effectiveness, and detecting payment fraud at the U.S. Department of Veterans Affairs.

At the same time, Hysen notes that AI introduces new risks. Drawing on guidance from the Cybersecurity and Infrastructure Security Agency (CISA), the report identifies three broad categories: attacks using AI, attacks targeting AI systems, and failures in AI design and implementation. Hysen argues that effective governance is necessary to ensure AI is “safe, ethical, secure, and aligned with organizational strategy and values.”

The paper is designed as a practical playbook for public-sector leaders. It organizes best practices into five stages of AI governance: policy development, leadership and resourcing, intake and inventory, risk assessment and management, and publication and engagement.

In the policy development stage, Hysen advises governments to begin with a transformative vision for what AI can accomplish, adopt “minimum viable governance” focused on core concepts, and rely on standard definitions while refreshing policies on a regular but not excessive schedule.

For leadership and resourcing, the report recommends assigning a clear leader regardless of job title, using governance boards for alignment rather than micromanagement, treating governance as a core function, signaling support from top leadership, and building communities of practice to spread expertise.

The intake and inventory stage calls for agencies to distinguish between AI use cases and systems, collect information early and repeatedly, tailor intake requirements to the level of risk, and automate checks through procurement, budgeting, and security processes.

For risk assessment and management, the playbook recommends practical risk tiers, structured impact assessments, and continuous monitoring. The final stage, publication and engagement, emphasizes transparency, stakeholder participation, and ongoing dialogue.

Hysen concludes that there is no single blueprint for organizing AI oversight, but that the governments making the fastest progress share five traits: They “articulate a mission-first vision, designate accountable leadership and provide resourcing, keep the processes lightweight but reliable, use risk assessments to focus and prioritize their efforts, and publish and engage continuously.”

Done well, he writes, AI governance becomes “an engine for responsible innovation, supporting governments in earning public trust and delivering measurable improvements in people’s lives.”

John Curran
John Curran is MeriTalk's Managing Editor covering the intersection of government and technology.