The government has been trying to apply artificial intelligence to combat the opioid crisis, with projects like real-time mapping of the signs of illegal drug use, and prescription monitoring programs to identify high rates of opioid prescriptions and incidents of “doctor shopping” for the drugs.
Officials from various Federal agencies are focusing on data-driven approaches, blockchain, and business-focused functions to brace for a digital future.
Rep. William Hurd, R-Texas, stressed at IBM’s Think Gov event today that America needs to lead the world in developing 5G wireless networks and artificial intelligence (AI) capabilities, especially with China on the rise as a voracious international competitor, and said successful development and application of the two technologies are inextricably linked.
A bipartisan AI caucus was announced today, led by Sens. Martin Heinrich, D-N.M., and Rob Portman, R-Ohio.
Artificial intelligence (AI), following on the heels of its older sibling RPA (robotic process automation), is no longer waiting to be born, but remains more of a toddler on the Federal IT scene–still learning to walk before trying to run, but bulking up from an appetite for serious Federal government tech interest and investment.
Representatives from the Defense Department (DoD) presented the department’s artificial intelligence (AI) program initiatives, and the need for collaboration with the private sector to accomplish them, at a hearing before the Senate Armed Services Committee today.
Federal and private-sector officials discussed the benefits of integrating artificial intelligence (AI) technologies into Federal agency operations, as well as the ethical and technical challenges that come with developing AI applications, during a panel discussion at this weekend’s Dell Experience at the South by Southwest (SXSW) Conference in Austin, Texas.
In response to the Trump administration’s American AI Initiative, Intel released its recommendations for a national strategy for artificial intelligence that hashes out the goals and actions needed to advance the AI industry.
The Department of Defense’s many artificial intelligence programs–currently over 600 and counting–generally share one stated goal: the blend of humans and machines working as a team, in which AI systems become “partners in problem-solving,” as opposed to our new overlords. Whether in jobs such as cybersecurity, analyzing reams of data, images, and video, operating swarms of drones, or disaster assistance, the idea is to have AI and machine learning systems that augment and improve what personnel can do.
Not unlike Michael Jordan in the 1997 NBA Finals–battling illness and requiring cold medicine–General Services Administration (GSA) CIO David Shive today spoke about how implementing artificial intelligence (AI) technologies has reduced costs for GSA, and how the agency has made efforts to better engage with start-ups.
Officials from the intelligence community on Thursday discussed the importance of using artificial intelligence to process and analyze the large amounts of data collected by agencies, freeing analysts to shift their efforts toward investigating anomalies identified by AI.
Reps. Brenda Lawrence, D-Mich., and Ro Khanna, D-Calif., today introduced a new resolution supporting “the development of guidelines for the ethical development of artificial intelligence (AI).”
Most private and public sector CIOs have increased, are increasing, or will increase their spending on cybersecurity and automation software deployments, according to a survey released today by Grant Thornton and the TBM Council.
The Department of Defense’s Artificial Intelligence Strategy puts the DoD on more of a fast track toward developing and employing AI and machine learning to support, as the strategy’s preface states, “a force fit for our time.” The strategy outlines an accelerated, collaborative approach with industry, academia, and allies toward new technologies that will “transform […]
The Intelligence Advanced Research Projects Activity (IARPA) announced it will host a Proposers’ Day on Feb. 26 for its Secure, Assured, Intelligent Learning Systems (SAILS) program, and its Trojans in Artificial Intelligence (TrojAI) program.
Panelists at a Brookings Institution event last week agreed that it’s more important to implement artificial intelligence (AI) and other “smart cities” technologies in a secure and responsible manner than merely to win the race to be the first tech adopters on the block.
It turns out a little knowledge can indeed be a dangerous thing.
That’s something the Army Research Laboratory (ARL) discovered when testing the value of artificial intelligence (AI) as an aid to battlefield decision-making. Researchers from ARL and the University of California, Santa Barbara, found in a series of test scenarios that people trust their own judgment more than they trust an AI’s advice. This was true even when an AI agent provided perfect guidance, and when ignoring that advice led to negative results. People might trust an AI personal assistant to recommend a movie or the best way to drive to the theater, but not so much when they have skin–or their own skin–in the game.
In late January, the Defense Advanced Research Projects Agency (DARPA) issued a solicitation to invite submissions of innovative basic or applied research concepts in neurotechnology.
The Department of Defense released a summary of the DoD AI Strategy today that sets goals to support military personnel and protect the country, with the Joint Artificial Intelligence Center (JAIC) leading the effort.
A new report from the Center for Long-Term Cybersecurity (CLTC) at the University of California-Berkeley recommends that national governments use their own spending on development and implementation of artificial intelligence (AI) technologies to shape best practices that will help govern activity in the AI arena as use of the technology becomes widespread.
President Trump’s executive order on artificial intelligence (AI), announced by the White House today, focuses on prioritizing Federal government investments in AI-driven projects and on directing Federal agencies to develop research and development budgets for AI that support their core missions.
Government and policy-makers shouldn’t put up unnecessary barriers to deploying artificial intelligence (AI) out of concern over perceived risks associated with the technology.
Top officials from communications industry trade groups told members of the Senate Commerce, Science, and Transportation Committee today what few, if any, in the hearing room would disagree with: the United States needs to win the global race to leadership in 5G communications services and technologies.
Department of Defense (DoD) and private sector leaders gathered to discuss the state of cybersecurity in the U.S. military during a Tuesday Federal Executive Forum webinar.
Google urged governments to avoid excessive regulation of artificial intelligence (AI), instead suggesting “international standards and norms” in key areas in a white paper released last week.
Maria Roat, chief information officer at the Small Business Administration, provided a run-down of her office’s extensive to-do list during a keynote address on Thursday at the Veritas Public Sector Vision Day event, and emphasized the importance of laying the proper data-management policy groundwork before embarking on cloud deployments and forays into artificial intelligence (AI) technologies.
From a cybersecurity perspective, the strengths of artificial intelligence (AI) and machine learning (ML) are also weaknesses. The capacity to crunch massive amounts of data, identify patterns, and learn while working covers a lot of territory, but also leaves room for vulnerabilities, which Pentagon and Intelligence Community (IC) researchers want to close up. And the job doesn’t look easy.
The Department of Defense (DoD) has put a lot of emphasis on speeding up the acquisition and development of new technologies out of a need to keep pace with new advances and potential adversaries. But a new report from the Government Accountability Office (GAO)–evaluating the Army Futures Command modernization effort–throws in a word of caution, saying there is such a thing as going too fast.
When the Department of Defense considers its sweeping plans for artificial intelligence, the biggest challenges might not be with the technology itself. It just might be with the boots on the ground, even if the ground is inside a data center.
In a Department of Health and Human Services (HHS) blog post released Thursday, the HHS Office of the Chief Technology Officer (CTO) and Presidential Innovation Fellows (PIF) announced that they have completed a 14-week pilot program that sought to “improve clinical trials, experimental therapies, and data-driven solutions” for challenges stemming from cancer, Lyme disease, and other tick-borne diseases using Federal data and artificial intelligence (AI).