USA: EPIC issues report on State AI Procurement
On September 14, 2023, the Electronic Privacy Information Center (EPIC) published the report Outsourced and Automated on the use of artificial intelligence (AI) by government organizations. Firstly, the report highlights the risks of using private AI systems in government processes, including risks involving data use and data privacy, risks involving the accuracy, bias, and reliability of outputs, and risks that undermine government authority and accountability. According to the report, these risks may manifest through risk scoring systems that make determinations about individuals, eligibility screening for applicants to government services, fraud detection that matches applicants against commercial databases, and predictive policing and automated surveillance systems.
Secondly, the report discusses the outsourcing and automation of government programs through private AI systems. The report attributes the procurement of AI systems to the fact that many of the largest AI vendors market their systems directly to state agencies and state legislatures, and to the difficulty state agencies face in attracting employees with AI expertise. Specifically, the report identifies competitive bidding, non-competitive bidding in emergency or routine procurement, and cooperative purchasing as the processes through which government organizations have increasingly adopted AI systems.
Finally, the report provides recommendations for the reform of AI procurement by government organizations. Notably, to limit the potential harms of AI systems, the report recommends:
- the establishment of processes for auditing AI systems and restricting the most harmful uses;
- the imposition of protective language in AI contracts by law to empower agencies during contract negotiations and while monitoring AI systems;
- increased transparency and support for public recourse for AI harms, including contractual rights for those receiving government support; and
- the pursuit of non-AI options when agencies cannot mitigate AI harms.