Collection weeding: Innovative processes and tools to ease the burden (2020)
Evaluating collections and ultimately removing content poses a variety of difficult issues, including choosing appropriate deselection criteria, communicating with stakeholders, providing accountability, and managing the overall timetable to finish projects on time. The Science and Engineering librarians at Brigham Young University evaluated their entire print collection of over 350,000 items within one year, significantly reducing the number of items kept on the open shelves and the physical collection footprint. Keys to accomplishing this project were extensive preparation, tracking progress and accountability facilitated by Google Sheets and an interactive GIS stacks map, and stakeholder feedback facilitated by a novel web-based tool. This case study discusses guidelines to follow and pitfalls to avoid for any organization that is considering a large- or small-scale collection evaluation project.
Keywords: Weeding | Academic libraries | Collection management | Deselection of library materials | Collection evaluation
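The abstract describes applying deselection criteria across a large catalog but does not state BYU's actual criteria. A minimal sketch of how such a rule-based pass might look, with purely hypothetical thresholds (publication year, recent checkouts, duplicate holdings):

```python
from dataclasses import dataclass

@dataclass
class Item:
    call_number: str
    pub_year: int
    checkouts_last_10y: int
    duplicate_copies: int

def flag_for_review(item: Item) -> bool:
    """Flag low-use, older, duplicated items for weeding review.
    Thresholds are illustrative only, not BYU's criteria."""
    return (
        item.pub_year < 2000
        and item.checkouts_last_10y == 0
        and item.duplicate_copies > 0
    )

collection = [
    Item("QA76.73", 1995, 0, 2),
    Item("TK5105", 2015, 7, 0),
]
flagged = [i for i in collection if flag_for_review(i)]  # only the first item
```

In practice the flagged list would feed a shared tracking sheet (the abstract mentions Google Sheets) for librarian and stakeholder review rather than automatic removal.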
A grounded theory examination of project managers' accountability (2020)
Twenty-six interviews were conducted with a snowball sample of project managers to explore how they were influenced by accountability arrangements and how they responded to accountability demands. Using a grounded theory approach to code the interview data, this study revealed that project managers develop new skills to respond to accountability demands. These effects are facilitated by resource-based mechanisms and reflexivity that interact with the contextual factors of the project. The study broadens the understanding of accountability in project management and suggests a model for further empirical examination.
Keywords: Accountability | Effect of accountability | Project management | Grounded theory
Lessons Learned About Autonomous AI: Finding a Safe, Efficacious, and Ethical Path Through the Development Process (2020)
Artificial intelligence (AI) describes systems capable of making decisions of high cognitive complexity; autonomous AI systems in healthcare are AI systems that make clinical decisions without human oversight. Such rigorously validated medical diagnostic AI systems hold great promise for improving access to care, increasing accuracy, and lowering cost, while enabling specialist physicians to provide the greatest value by managing and treating patients whose outcomes can be improved. Ensuring that autonomous AI provides these benefits requires evaluation of the autonomous AI's effect on patient outcome, design, validation, data usage, and accountability, from a bioethics and accountability perspective. We performed a literature review of bioethical principles for AI and derived evaluation rules for autonomous AI grounded in those principles. The rules cover patient outcome, validation, reference standard, design, data usage, and accountability for medical liability. Application of the rules is illustrated by the first autonomous point-of-care diabetic retinopathy examination to receive US Food and Drug Administration (FDA) de novo authorization, granted after a preregistered clinical trial. Physicians need to become competent in understanding the potential risks and benefits of autonomous AI, and understand its design, safety, efficacy and equity, validation, and liability, as well as how its data were obtained. The autonomous AI evaluation rules introduced here can help physicians understand the limitations and risks, as well as the potential benefits, of autonomous AI for their patients. (Am J Ophthalmol 2020;214:134–142.)
The nature of police shootings in New Zealand: A comparison of mental health and non-mental health events (2020)
The use of firearms by police in mental health-related events has not been previously researched in New Zealand. This study analysed reports of investigations carried out by the Independent Police Conduct Authority (IPCA) between 1995 and 2019. We extracted data relating to mental health state, demographics, setting, police response, outcome of shooting, and whether the individual was known to police or mental health services and had a history of mental distress or drug use. Of the 258 reports analysed, 47 (18%) involved mental health-related events compared to 211 (82%) classified as non-mental health events. Nineteen (40.4%) of the 47 mental health events resulted in shootings, compared to 31 (14.8%) of the 211 non-mental health events. Of the 50 cases that involved shootings, 38% (n = 19) were identified as mental health events compared to 62% (n = 31) non-mental health events. Over half of the mental health events (n = 11, 57.9%) resulted in fatalities, compared to 35.5% (n = 11) of the non-mental health events. Cases predominantly involved young males. We could not ascertain the ethnicity of individuals from the IPCA reports. Across all shooting events, a high proportion of individuals possessed a weapon, predominantly either a firearm or a knife, and just under half were known to police and had known substance use. Of the 19 mental health events, 47.4% (n = 9) of individuals were known to mental health services, and in 89.5% (n = 17) of cases whānau (family) were aware of the individual's current (at the time of the event) mental health distress and/or history. These findings suggest opportunities to prevent the escalation of events to the point where they involve shootings. The lack of ethnicity data limits the accountability of the IPCA and is an impediment to informed discussion of the police response to people of different ethnicities, and Māori in particular, in New Zealand.
Keywords: Police | Mental health | Use of force | Firearms | Ethnicity
A Comparative Assessment and Synthesis of Twenty Ethics Codes on AI and Big Data (2020)
To date, more than 80 codes exist for handling the ethical risks of artificial intelligence and big data. In this paper, we analyse where those codes converge and where they differ. Based on an in-depth analysis of 20 guidelines, we identify three procedural action types (1. control and document, 2. inform, 3. assign responsibility) as well as four clusters of ethical values whose promotion or protection is supported by these procedural activities. We synthesise previous approaches into a framework of seven principles, combining the four principles of biomedical ethics with three distinct procedural principles: control, transparency, and accountability.
Keywords: data ethics | ethical guidelines | artificial intelligence
Transparency and accountability in AI decision support: Explaining and visualizing convolutional neural networks for text information (2020)
Proliferating applications of deep learning, along with the prevalence of large-scale text datasets, have revolutionized the natural language processing (NLP) field and driven its recent explosive growth. Nevertheless, it is argued that state-of-the-art studies focus excessively on producing quantitative performances superior to existing models, by playing "the Kaggle game." Hence, the field requires more effort in solving new problems and proposing novel approaches and architectures. We claim that one promising and constructive effort would be to design transparent and accountable artificial intelligence (AI) systems for text analytics. By doing so, we can enhance the applicability and problem-solving capacity of the system for real-world decision support. It is widely accepted that deep learning models demonstrate remarkable performances compared to existing algorithms. However, they are often criticized for being less interpretable, i.e., the "black box." In such cases, users tend to hesitate to utilize them for decision-making, especially in crucial tasks. Such complexity obstructs the transparency and accountability of the overall system, potentially debilitating the deployment of decision support systems powered by AI. Furthermore, recent regulations are emphasizing fairness and transparency in algorithms to a greater extent, turning explanations more compulsory than voluntary. Thus, to enhance the transparency and accountability of the decision support system while preserving the capacity to model complex text data, we propose the Explaining and Visualizing Convolutional neural networks for Text information (EVCT) framework. By adopting and ameliorating cutting-edge methods in NLP and image processing, the EVCT framework provides a human-interpretable solution to the problem of text classification while minimizing information loss.
Experimental results with large-scale, real-world datasets show that EVCT performs comparably to benchmark models, including widely used deep learning models. In addition, we provide instances of human-interpretable and relevant visualized explanations obtained from applying EVCT to the dataset and possible applications for real-world decision support.
Keywords: Convolutional neural network | Machine learning interpretability | Class activation mapping | Explainable artificial intelligence
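The keywords mention class activation mapping (CAM), one of the image-processing explanation methods this line of work adapts to text. A minimal NumPy sketch of the basic CAM computation, assuming a conv → global-average-pooling → linear-classifier architecture (the toy shapes and random activations are illustrative, not EVCT's actual design):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: 8 token positions, 4 convolutional channels, 2 classes.
n_tokens, n_channels, n_classes = 8, 4, 2
feature_maps = rng.random((n_tokens, n_channels))    # conv activations per token
class_weights = rng.random((n_channels, n_classes))  # GAP -> output-layer weights

def class_activation_map(c: int) -> np.ndarray:
    """Per-token relevance for class c: each token's channel activations
    weighted by that class's output-layer weights."""
    cam = feature_maps @ class_weights[:, c]
    return cam / cam.max()  # normalise to [0, 1] for visualisation

cam = class_activation_map(0)
top_token = int(np.argmax(cam))  # index of the most influential token
```

Highlighting tokens by their normalised CAM score is what turns the "black box" classifier's decision into a human-interpretable heatmap over the input text.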
Trustworthy AI in the Age of Pervasive Computing and Big Data (2020)
The era of pervasive computing has resulted in countless devices that continuously monitor users and their environment, generating an abundance of user behavioural data. Such data may support improving the quality of service, but may also enable adverse uses such as surveillance and advertising. In parallel, Artificial Intelligence (AI) systems are being applied to sensitive fields such as healthcare, justice, or human resources, raising multiple concerns about the trustworthiness of such systems. Trust in AI systems is thus intrinsically linked to ethics, including the ethics of algorithms, the ethics of data, and the ethics of practice. In this paper, we formalise the requirements of trustworthy AI systems through an ethics perspective. We specifically focus on the aspects that can be integrated into the design and development of AI systems. After discussing the state of research and the remaining challenges, we show how a concrete use case in smart cities can benefit from these methods.
Index Terms: Artificial Intelligence | Pervasive Computing | Ethics | Data Fusion | Transparency | Privacy | Fairness | Accountability | Federated Learning
AI governance in the public sector: Three tales from the frontiers of automated decision-making in democratic settings (2020)
The rush to understand the new socio-economic contexts created by the wide adoption of AI is justified by its far-ranging consequences, spanning almost every walk of life. Yet the public sector's predicament is a tragic double bind: its obligation to protect citizens from potential algorithmic harms is at odds with the temptation to increase its own efficiency, or in other words, to govern algorithms while governing by algorithms. Whether such a dual role is even possible has been a matter of debate. The challenge stems from algorithms' intrinsic properties: they are distinct from other digital solutions long embraced by governments, and they create externalities that rule-based programming lacks. As pressures to deploy automated decision-making systems in the public sector become prevalent, this paper examines how the use of AI in the public sector, in relation to existing data governance regimes and national regulatory practices, can intensify existing power asymmetries. To this end, it investigates the legal and policy instruments associated with the use of AI for strengthening the immigration process control system in Canada, "optimising" the employment services in Poland, and personalising the digital service experience in Finland, and argues for a common framework to evaluate the potential impact of the use of AI in the public sector. In this regard, it discusses the specific effects of automated decision support systems on public services and the growing expectations for governments to play a more prevalent role in the digital society, ensuring that the potential of technology is harnessed while negative effects are controlled and, where possible, avoided.
This is of particular importance in light of the current COVID-19 emergency, in which AI and the underpinning regulatory framework of data ecosystems have become crucial policy issues: as more and more innovations are based on large-scale data collection from digital devices and the real-time accessibility of information and services, the contact and relationships between institutions and citizens could strengthen, or undermine, trust in governance systems and democracy.
Keywords: Artificial intelligence | Public sector innovation | Automated decision making | Algorithmic accountability
Banalization discourse in sentenced persons: Some clinical aspects in the penitentiary context (2020)
Objectives. – The use of the term "banalization" has become widespread in the judicial and penitentiary context, as a descriptive way for the professional (penitentiary counsellor, psychologist, magistrate, etc.) to account for the gap between an institutionally sanctioned offense and the convicted person's point of view. In this way, we hear in our daily lives about people in the criminal justice system who "banalize their actions." However, this term lacks a clear definition and operationality, appearing more as a general category and sometimes as a "catch-all." This article aims to question the use of banalization in order to give it a more precise definition, in particular regarding its psychodynamic stakes.
Method. – Starting from a psychologist's practice in a penitentiary service of insertion and probation, and relying on clinical material around banalization discourse, we propose to develop some aspects of such discursive anchorages that are located at the crossroads of the singular subject and the social reference, or which question the notion of defense mechanism. A detour through the works of H. Arendt will also allow us to extend the theoretical field to categories of thought activity, individual responsibility, and the relationship to institution and culture (prohibition, law, norms, etc.), and will also illuminate the distinction between banality and banalization.
Results. – Banalization discourse demonstrates, for the subject, the psychodynamic stakes in terms of the ability to think through one's actions, to dialecticize one's individual responsibility, and to situate oneself in a relationship with the other. There is, furthermore, a social dimension (rules of living together, normative, instituted) at stake. From this point of view, banalization discourse involves the subject as subject of language and of a social bond, incarnated here in the institutional judiciary and penitentiary context.
Discussion. – The discourse of banalization, on the condition of being questioned outside of mere moral considerations or judgments, opens up a complex discursive figure for the professional, in light of psychodynamic determinisms, references to the institutional symbolic framework, and the expression of a language practice within the social bond. Banalization questions, from this point of view, the meaning of the sentence and the probation process put in place around the notion of the offender's accountability.
Conclusions. – Banalization, beyond the person who minimizes her/his actions, refers to a wider clinical vision in a penitentiary environment, since it touches the subjectivation of the judicial event, the manner in which subjects are included in the social bond and what regulates it, their empathic preoccupations, their ability to conceptualize their actions, and their relationship to a common reference point with its necessary limits. In this way, banalization emerges as a figure of language that must be considered in a way that goes beyond the mere description of a representation gap.
Keywords: Banalization | Discourse | Penitentiary institution | Capacity to think | Alterity | Responsibility | Social bond
A Type-2 Fuzzy Logic Approach to Explainable AI for regulatory compliance, fair customer outcomes and market stability in the Global Financial Sector (2020)
The field of Artificial Intelligence (AI) is enjoying unprecedented success and is dramatically transforming the landscape of the financial services industry. However, there is a strong need to develop an accountability and explainability framework for AI in financial services, based on a risk-based assessment of appropriate explainability levels and techniques by use case and domain. This paper proposes a risk management framework for the implementation of AI in banking with consideration of explainability, and outlines the implementation requirements to enable AI to achieve positive outcomes for financial institutions and the customers, markets and societies they serve. The work presents the evaluation of three algorithmic approaches (Neural Networks, Logistic Regression and Type-2 Fuzzy Logic with evolutionary optimisation) for nine banking use cases. We review the emerging regulatory and industry guidance on the ethical and safe adoption of AI from key markets worldwide and compare leading AI explainability techniques. We show that the Type-2 Fuzzy Logic models deliver very good performance, comparable to or lagging marginally behind the Neural Network models in terms of accuracy, but outperform all other models for explainability; they are therefore recommended as a suitable machine learning approach for use cases in financial services from an explainability perspective. This research is important for several reasons: (i) there is limited knowledge and understanding of the potential of Type-2 Fuzzy Logic as a highly adaptable, high-performing, explainable AI technique; (ii) there is limited cross-discipline understanding between financial services and AI expertise, and this work aims to bridge that gap; (iii) regulatory thinking is evolving with limited guidance worldwide, and this work aims to support that thinking; (iv) it is important that banks retain customer trust and maintain market stability as adoption of AI increases.
Keywords: Regulatory Compliance | Accountability and Explainability | Type-2 Fuzzy Logic | Neural Networks
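The abstract does not give the paper's rule bases or membership functions, but the core idea behind interval type-2 fuzzy logic can be sketched briefly: each fuzzy set has a "footprint of uncertainty" bounded by a lower and an upper type-1 membership function, so every input fires a rule with an interval rather than a single degree. A minimal sketch with a hypothetical credit-risk rule and illustrative parameters (the paper's models would typically use full Karnik-Mendel type reduction; the averaging step here is a deliberate simplification):

```python
import numpy as np

def it2_gaussian(x: float, mean: float, sigma_lo: float, sigma_hi: float):
    """Interval type-2 Gaussian membership: two type-1 Gaussians with the
    same mean but uncertain standard deviation bound the footprint of
    uncertainty. Returns (lower, upper) membership degrees."""
    lower = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)  # narrower Gaussian
    upper = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)  # wider Gaussian
    return lower, upper  # lower <= upper whenever sigma_lo < sigma_hi

# Hypothetical rule antecedent: "income is Medium" (parameters illustrative).
lo, hi = it2_gaussian(x=45.0, mean=50.0, sigma_lo=10.0, sigma_hi=20.0)

# Simplified type reduction: collapse the firing interval to its midpoint.
firing_strength = (lo + hi) / 2.0
```

The explainability claim rests on exactly this structure: each rule is a readable IF-THEN statement, and the interval (lo, hi) makes the model's uncertainty about a given customer explicit, which a neural network's weights do not.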