Intel Privacy Proposal Aims at ‘Ethical’ Data Use, AI Development


Tech giant Intel today unveiled its proposal for a national data privacy law that aims to strengthen personal data privacy rules as a necessary ingredient for wide public acceptance of artificial intelligence and other emerging data technologies down the road.

Intel’s proposal joins those of several other big tech firms, industry trade groups, and members of Congress who are preparing the landscape for when legislators take up national data privacy legislation. The timing of any serious legislative push is uncertain.

“Effective privacy regulation is critical to allow technologies like artificial intelligence to help solve the world’s greatest challenges,” the company said in a summary of its proposed legislation.

“The combination of advances in computing power, memory and analytics create a possibility for technology to make tremendous strides in precision medicine, disease detection, driving assistance, increased productivity, workplace safety, education and more,” it said.

Intel, which is working to develop many of those technologies and is focused on “integrating artificial intelligence capabilities across the global digital infrastructure,” said it recognizes “the need for a legal structure to prevent harmful uses of the technology and to preserve personal privacy so that all individuals embrace new, data-driven technologies.”

Most important, Intel said, is a law that protects individuals and enables “the ethical use of data.” It continued, “Ethical use of data will be critical as we use data to train artificial intelligence algorithms to detect bias and enhance cyber security. In short, it takes data to protect data.”

“The US needs a law that promotes ethical data stewardship, not one that just attempts to minimize harm,” the company said.

Major elements of Intel’s proposal include:

  • Appointment of the Federal Trade Commission as enforcer, with the ability to “enforce meaningful but fair sanctions”;
  • Federal preemption of state privacy laws, while preserving the ability of state attorneys general to apply sanctions in situations where the FTC declines to undertake enforcement action;
  • Creation of a “safe harbor” from civil penalties for organizations that adopt “robust privacy programs”;
  • Reliance on Intel’s “rethinking” of the existing Fair Information Practice Principles (FIPPs) developed by the Organisation for Economic Co-operation and Development (OECD) regarding data privacy and cross-border data flows;
  • Creation by organizations of new mechanisms for individuals to provide “meaningful consent” and “informed choices” for use of their data;
  • Organizations stating “narrow and specific” reasons for collecting data;
  • Organizations controlling how data is used by parties to whom data is transferred;
  • Organizations adopting “reasonable measures” to protect data security;
  • Organizations publishing multiple forms of privacy policies including “an explicit notice when particularly sensitive data is being collected”;
  • Ability of individuals to “object” when data collected about them is “incorrect or when its use will disproportionately cause harm”; and
  • Measurements of data quality to adjust for deficiencies caused by limited amounts of data, with one example being sufficiency of “data from ethnic and racial minorities” for the purpose of precision medicine.