In response to the Trump administration’s American AI Initiative, Intel released its recommendations for a national artificial intelligence strategy, laying out the goals and actions it says are needed to advance the AI industry.

Intel’s plan consolidates those recommendations into four key pillars:

  • Invest in research and development to foster innovation;
  • Create new employment opportunities and protect people’s welfare;
  • Accelerate the development of AI systems through responsible data liberation; and
  • Enable development and implementation by removing legal and policy barriers.

The first step the U.S. should take, Intel writes, is to launch a study to determine where to invest in AI research and development, both in technology areas like cyber-defense, robotics, and data analytics, and in societal initiatives like addressing climate change, sustainability, and education.

From there, Intel doesn’t shy away from maximizing the use of AI capabilities in government. The document states that all levels of government should adopt AI systems while also encouraging the private development and use of AI. It strongly supports U.S. involvement in international cooperation and standard-setting for AI systems, and suggests that the AI economy would grow best through public- and private-sector collaboration.

Underlying this broad expansion of AI use is Intel’s suggestion to take a light-touch approach to regulating the AI industry.

“Regulating individual algorithms would limit innovation and make it difficult for industry to make use of and innovate in AI,” the document states.

Another integral part of Intel’s strategy is its call to significantly expand access to data, since AI capabilities improve as more data becomes available.

“Government incentives to increase willingness and comfort with sharing information with the public and private sectors will help shift the mentality of data as a product, and encourage data sharing,” Intel writes.

Intel notes that the government needs to develop ethical standards as it expands AI capabilities. Increased data access, in particular, should be coupled with federal privacy legislation and with policies that require accountability for ethical design and implementation, in order to avoid potential harm to individuals and society.

On the ethics front, Intel notes that advancing automation through AI could have consequences for the current American workforce, and that the U.S. should bolster unemployment insurance and benefits so that individuals who lose jobs can find alternative work, particularly through reskilling programs.

At the same time, Intel pushes for the U.S. to move full steam ahead in developing a future workforce that both creates and uses AI, arguing that the national education system should emphasize skills pertinent to the AI industry, like critical thinking and complex problem-solving.
