The White House Office of Science and Technology Policy (OSTP), as part of a larger effort to maintain U.S. leadership in Artificial Intelligence (AI), released 10 principles to guide federal agencies in regulating AI developed by private firms.
The principles were released as part of the administration’s American AI Initiative, which President Donald Trump initiated via executive order last year. According to MIT Technology Review, the principles have three main goals: “to ensure public engagement, limit regulatory overreach, and, most important, promote trustworthy AI that is fair, transparent, and safe.”
The principles, as summarized by MIT Technology Review, are as follows:
- Public trust in AI. The government must promote reliable, robust, and trustworthy AI applications.
- Public participation. The public should have a chance to provide feedback in all stages of the rule-making process.
- Scientific integrity and information quality. Policy decisions should be based on science.
- Risk assessment and management. Agencies should decide which risks are and aren’t acceptable.
- Benefits and costs. Agencies should weigh the societal impacts of all proposed regulations.
- Flexibility. Any approach should be able to adapt to rapid changes and updates to AI applications.
- Fairness and nondiscrimination. Agencies should make sure AI systems don’t discriminate illegally.
- Disclosure and transparency. The public will trust AI only if it knows when and how it is being used.
- Safety and security. Agencies should keep all data used by AI systems safe and secure.
- Interagency coordination. Agencies should communicate with one another to keep AI-related policies consistent and predictable.