Abstract: Algorithms are widely used in society to make decisions that affect most aspects of our lives, including which school a child can attend, whether a person will be offered credit from a bank, what products are advertised to consumers, and whether someone will receive an interview for a job. Federal, state, and local governments are increasingly using algorithms to deliver government services. Algorithmic systems are used to make decisions about government resource allocation (e.g., where fire stations are built or where police are dispatched), expedite government procedures (e.g., public benefits eligibility and compliance), and aid government officials in making important decisions, such as whether a person will receive bail or whether a family will receive a follow-up visit from a child welfare agency. Despite the importance of these uses and decisions, government agencies frequently procure, develop, and implement algorithmic systems with minimal to no transparency, public notice, community input, oversight, or accountability measures. Procurement officers and agency staff often lack the technical expertise to evaluate algorithmic systems, their capabilities, and their potential consequences. This creates a knowledge imbalance in contracting, particularly because many vendors of algorithmic systems sell almost exclusively to government agencies. Consequently, vendors are able to oversell the utility and value of a system or offer the system at reduced costs that resource-constrained agencies find difficult to turn down. Algorithms are fallible human creations; like the human processes they replace, they can embed errors and bias. When algorithmic tools are adopted by government agencies without adequate transparency, accountability, and oversight, their use can threaten civil liberties and exacerbate existing problems within government agencies (e.g., bias, inefficiency, and opacity in decision making).
We know that federal, state, and local governments are increasingly implementing algorithmic systems in their daily practices, but we still do not know how widely such systems are used, or how deeply they are integrated, at any level of government. The following toolkit is intended to provide legal and policy advocates with a basic understanding of government use of algorithms, including a breakdown of key concepts and questions that may arise when engaging with this issue, an overview of existing research, and summaries of algorithmic systems currently used in government. The toolkit also includes resources for advocates interested in, or currently engaged in, work to uncover where algorithms are being used and to create transparency and accountability mechanisms.
Keywords: algorithm, deep learning, artificial intelligence, AI, accountability, toolkit