AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings
(2020)
The rush to understand the new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector's predicament is a tragic double bind: its obligation to protect citizens from potential algorithmic harms is at odds with the temptation to increase its own efficiency - in other words, to govern algorithms while governing by algorithms. Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms' intrinsic properties, which distinguish them from other digital solutions long embraced by governments and create externalities that rule-based programming lacks. As pressures to deploy automated decision-making systems in the public sector become prevalent, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, by investigating the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, "optimising" the employment services in Poland, and personalising the digital service experience in Finland, the paper advocates the need for a common framework to evaluate the potential impact of the use of AI in the public sector. In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectation that governments play a more prevalent role in the digital society, ensuring that the potential of technology is harnessed while negative effects are controlled and, where possible, avoided.
This is of particular importance in light of the current COVID-19 emergency, in which AI and the regulatory frameworks underpinning data ecosystems have become crucial policy issues: as more and more innovations are based on large-scale data collection from digital devices and on the real-time accessibility of information and services, the contact and relationships between institutions and citizens could strengthen - or undermine - trust in governance systems and democracy.
Keywords: Artificial intelligence | Public sector innovation | Automated decision making | Algorithmic accountability
Beyond mystery: Putting algorithmic accountability in context
(2019)
Critical algorithm scholarship has demonstrated the difficulties of attributing accountability for the actions and effects of algorithmic systems. In this commentary, we argue that we cannot stop at denouncing the lack of accountability for algorithms and their effects but must engage the broader systems and distributed agencies that algorithmic systems exist within; including standards, regulations, technologies, and social relations. To this end, we explore accountability in ‘‘the Generated Detective,’’ an algorithmically generated comic. Taking up the mantle of detectives ourselves, we investigate accountability in relation to this piece of experimental fiction. We problematize efforts to effect accountability through transparency by undertaking a simple operation: asking for permission to re-publish a set of the algorithmically selected and modified words and images which make the frames of the comic. Recounting this process, we demonstrate slippage between the ‘‘complication’’ of the algorithm and the obscurity of the legal and institutional structures in which it exists.
Keywords: Algorithms | normativity | accountability | responsibility | mystery | detective
Big Data and security policies: Towards a framework for regulating the phases of analytics and use of Big Data
(2017)
Computer Law & Security Review 33 (2017) 309–323, http://dx.doi.org/10.1016/j.clsr.2017.03.002. Big Data analytics in national security, law enforcement and the fight against fraud have the potential to reap great benefits for states, citizens and society but require extra safeguards to protect citizens' fundamental rights. This involves a crucial shift in emphasis from regulating Big Data collection to regulating the phases of analysis and use. In order to benefit from the use of Big Data analytics in the field of security, a framework has to be developed that adds new layers of protection for fundamental rights and safeguards against erroneous and malicious use. Additional regulation is needed at the levels of analysis and use, and the oversight regime is in need of strengthening. At the level of analysis – the algorithmic heart of Big Data processes – a duty of care should be introduced that is part of an internal audit and external review procedure. Big Data projects should also be subject to a sunset clause. At the level of use, profiles and (semi-)automated decision-making should be regulated more tightly. Moreover, the responsibility of the data processing party for accuracy of analysis – and decisions taken on its basis – should be anchored in legislation. The general and security-specific oversight functions should be strengthened in terms of technological expertise, access and resources. The possibilities for judicial review should be expanded to stimulate the development of case law. © 2017 Dennis Broeders, Erik Schrijvers, Bart van der Sloot, Rosamunde van Brakel, Josta de Hoog & Ernst Hirsch Ballin. Published by Elsevier Ltd. All rights reserved.
Keywords: Big Data | Security | Data protection | Privacy | Regulation | Fraud | Policing | Surveillance | Algorithmic accountability | the Netherlands