Cath, C. (2018). Governing Artificial Intelligence: Ethical, Legal and Technical Opportunities and Challenges. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180080.
Abstract: This paper is the introduction to the special issue entitled: Governing artificial intelligence: ethical, legal and technical opportunities and challenges. Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic, social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. It also gives a brief overview of recent developments in AI governance, how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is set, as well as providing some concrete suggestions to further the debate on AI governance.
Keywords: artificial intelligence, AI
|
Floridi, L. (2018). Soft Ethics, the Governance of the Digital and the General Data Protection Regulation. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180081.
Abstract: The article discusses the governance of the digital as the new challenge posed by technological innovation. It then introduces a new distinction between soft ethics, which applies after legal compliance with legislation, such as the General Data Protection Regulation in the European Union, and hard ethics, which precedes and contributes to shape legislation. It concludes by developing an analysis of the role of digital ethics with respect to digital regulation and digital governance.
Keywords: artificial intelligence, AI
|
Kroll, J. A. (2018). The Fallacy of Inscrutability. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180084.
Abstract: Contrary to the criticism that mysterious, unaccountable black-box software systems threaten to make the logic of critical decisions inscrutable, we argue that algorithms are fundamentally understandable pieces of technology. Software systems are designed to interact with the world in a controlled way and built or operated for a specific purpose, subject to choices and assumptions. Traditional power structures can and do turn systems into opaque black boxes, but technologies can always be understood at a higher level, intensionally in terms of their designs and operational goals and extensionally in terms of their inputs, outputs and outcomes. The mechanisms of a system's operation can always be examined and explained, but a focus on machinery obscures the key issue of power dynamics. While structural inscrutability frustrates users and oversight entities, system creators and operators always determine that the technologies they deploy are fit for certain uses, making no system wholly inscrutable. We investigate the contours of inscrutability and opacity, the way they arise from power dynamics surrounding software systems, and the value of proposed remedies from disparate disciplines, especially computer ethics and privacy by design. We conclude that policy should not accede to the idea that some systems are of necessity inscrutable. Effective governance of algorithms comes from demanding rigorous science and engineering in system design, operation and evaluation to make systems verifiably trustworthy. Rather than seeking explanations for each behaviour of a computer system, policies should formalize and make known the assumptions, choices and adequacy determinations associated with a system.
Keywords: artificial intelligence, AI
|
Nemitz, P. (2018). Constitutional Democracy and Technology in the Age of Artificial Intelligence. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180089.
Abstract: Given the foreseeable pervasiveness of artificial intelligence (AI) in modern societies, it is legitimate and necessary to ask how this new technology must be shaped to support the maintenance and strengthening of constitutional democracy. This paper first describes the four core elements of today's digital power concentration, which need to be seen in cumulation and which, taken together, are a threat both to democracy and to functioning markets. It then recalls the experience with the lawless Internet, the relationship between technology and the law as it has developed in the Internet economy, and the experience with the GDPR, before moving on to the key question for AI in democracy: namely, which of the challenges of AI can be safely and with good conscience left to ethics, and which need to be addressed by rules which are enforceable and carry the legitimacy of the democratic process, thus laws. The paper closes with a call for a new culture of incorporating the principles of democracy, rule of law and human rights by design in AI, and for a three-level technological impact assessment for new technologies like AI as a practical way forward for this purpose.
Keywords: artificial intelligence, AI
|
Marda, V. (2018). Artificial Intelligence Policy in India: A Framework for Engaging the Limits of Data-driven Decision-making. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376(2133), 20180087.
Abstract: Artificial intelligence (AI) is an emerging focus area of policy development in India. The country's regional influence, burgeoning AI industry and ambitious governmental initiatives around AI make it an important jurisdiction to consider, regardless of where the reader of this article lives. Even as existing policy processes intend to encourage the rapid development of AI for economic growth and social good, an overarching trend persists in India and several other jurisdictions: the limitations and risks of data-driven decisions still feature as retrospective considerations in the development and deployment of AI applications. This article argues that the technical limitations of AI systems should be reckoned with at the time of developing policy, and that the societal and ethical concerns that arise from such limitations should inform what policy processes aspire to achieve. It proposes a framework for such deliberation to occur, by analysing the three main stages of bringing machine learning (the most popular subset of AI techniques) to deployment: the data, model and application stages. It is written against the backdrop of India's current AI policy landscape, and applies the proposed framework to ongoing sectoral challenges in India. With a view to influencing existing policy deliberation in the country, it focuses on potential risks that arise from data-driven decisions in general, and in the Indian context in particular.
Keywords: artificial intelligence, AI
|