The Federal Trade Commission (FTC) said this week that its three commissioners unanimously approved a resolution giving agency staff sharper tools for conducting non-public investigations of products and services that “use or claim to be produced using artificial intelligence (AI) or claim to detect its use.”

The 3-0 vote authorizes FTC staff to use “compulsory process” as they look into AI-based products and services. The resolution will remain in effect for the next ten years.

The commission said the resolution will allow FTC staff to issue civil investigative demands (CIDs), which it described as a “form of compulsory process similar to a subpoena,” in AI-related investigations, while the agency retains its authority to determine when CIDs are issued.

Currently, the agency issues CIDs “to obtain documents, information and testimony that advance FTC consumer protection and competition investigations,” it said.  

In reporting its approval of the resolution, the FTC took note of both the benefits and challenges presented by AI tech.  

“Although AI, including generative AI, offers many beneficial uses, it can also be used to engage in fraud, deception, infringements on privacy, and other unfair practices, which may violate the FTC Act and other laws,” the agency said.  

“At the same time, AI can raise competition issues in a variety of ways, including if one or just a few companies control the essential inputs or technologies that underpin AI,” the FTC said.  

The FTC is one of several agencies given specific tasks to examine AI technology as part of the Biden administration’s AI Executive Order issued on Oct. 30.

Just last week, the agency announced a new exploratory challenge aimed at encouraging the development of “breakthrough ideas” to evaluate, monitor, and prevent malicious uses of “voice cloning” technologies enabled by AI. The agency said that voice cloning tech can be used for good purposes, but also said it “poses significant risk: families and small businesses can be targeted with fraudulent extortion scams; creative professionals, such as voice artists, can have their voices appropriated in ways that threaten their livelihoods and deceive the public.”
