The Federal Trade Commission (FTC) has released a report warning against using artificial intelligence (AI) to fight online harms and against relying on it as a policy solution.
The report, titled Combatting Online Harms Through Innovation, focuses on how AI tools can be inaccurate, biased, and discriminatory by design.
“Our report emphasizes that nobody should treat AI as the solution to the spread of harmful online content,” Samuel Levine, Director of the FTC’s Bureau of Consumer Protection, said in the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology—which can be both helpful and dangerous—will take these problems off our hands.”
The report came out after Congress passed legislation in 2021 directing the FTC to examine how AI can be used to address harmful online content and activity, such as bots, fake accounts, sexual exploitation, and hate crimes, on online platforms and websites.
The report offers the following recommendations:
- Avoid over-reliance on AI detection tools, which function as blunt instruments rather than precise ones.
- Keep humans in the loop when implementing AI.
- Maintain transparency about how these tools are used, and accountability for the outcomes and impacts of their use.
- Practice responsible data science when building these tools.
- Study platform AI interventions further.
- Make user tools available that help individuals avoid harmful or sensitive content on their own.
- Address availability and scalability, since large technology companies are responsible for most AI tools.
- Support content authenticity and provenance efforts.
- Enact legislation around the world to govern AI implementation.