Download and view articles related to trust :: Page 1
Download the best ISI articles with Persian translation
Search results - Trust

Number of articles found: 294
No. | Title | Type
1 The use of big data and data mining in nurse practitioner clinical education
The use of big data and data mining in nurse practitioner clinical education - 2020
Nurse practitioner (NP) faculty have not fully used data collected in NP clinical education for data mining. With current advances in database technology, including data storage and computing power, NP faculty have an opportunity to mine the enormous amounts of clinical data documented by NP students in electronic clinical management systems. The purpose of this project was to examine the use of big data and data mining in NP clinical education and to establish a foundation for competency-based education. Using a data mining knowledge discovery process, faculty are able to gain an increased understanding of clinical practicum experiences to inform competency-based NP education and the use of entrustable professional activities for the future.
Keywords: Big data | Data mining | Nurse practitioner clinical education | Competency-based education | Nurse Practitioner Core Competencies | Entrustable professional activities
English article
2 In law we trust: Lawyer CEOs and stock liquidity
In law we trust: Lawyer CEOs and stock liquidity - 2020
I find that about 8.5% of firms in the S&P 1500 sample are run by CEOs with a law degree (lawyer CEOs), and these firms have higher stock market liquidity than firms run by non-lawyer CEOs. I also find that stock market liquidity improves following the appointment of a lawyer CEO. Lawyer CEOs improve stock market liquidity because they improve the firm's information environment and reduce firm risk. Firms led by CEOs with legal expertise are associated with less stock price delay, weaker market reactions to corporate earnings announcements, and lower insider trading profits. Overall, this paper highlights the importance of CEO characteristics in enhancing financial market quality. (An illustrative liquidity calculation follows this entry.)
Keywords: Stock market liquidity | CEOs | Legal education | Insider trading
English article
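The abstract above does not say which liquidity measure the study uses. Purely as an illustration, the short Python sketch below computes the Amihud (2002) illiquidity ratio, one standard stock-liquidity proxy, on synthetic daily data; the column names and the data are assumptions, not taken from the paper.

# Illustrative sketch only: Amihud (2002) illiquidity ratio on synthetic data.
# Lower values indicate higher liquidity. Not the paper's actual measure or data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = 252
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, days)))  # synthetic price path
volume = rng.integers(50_000, 500_000, days)                 # synthetic share volume

df = pd.DataFrame({"close": prices, "volume": volume})
df["return"] = df["close"].pct_change()
df["dollar_volume"] = df["close"] * df["volume"]

# Amihud illiquidity: average of |daily return| / daily dollar volume
amihud = (df["return"].abs() / df["dollar_volume"]).mean()
print(f"Amihud illiquidity (synthetic firm): {amihud:.3e}")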
3 AI Crimes: A Classification
AI crimes: A classification - 2020
Intelligent and machine learning systems have infiltrated cyber-physical systems and smart cities through technologies such as the internet of things, image processing, robotics, speech recognition, self-driving, and predictive maintenance. To gain user trust, such systems must be transparent and explainable. Regulations are required to control crimes associated with these technologies. Such regulations and legislation depend on the severity of the artificial intelligence (AI) crimes they address, and on whether humans and/or intelligent systems are responsible for committing such crimes; they can therefore benefit from a classification tree of AI crimes. The aim of this paper is to review prior work on ethics for AI and to classify AI crimes by producing a classification tree that assists in AI crime investigation and regulation. (A sketch of such a tree structure follows this entry.)
Keywords: AI | classification tree | crimes | ethics | explainable AI | transparency | trust | privacy
English article
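The paper's classification tree itself is not reproduced in the abstract above. The Python sketch below only illustrates how such a taxonomy could be represented and traversed in code; the node labels are hypothetical placeholders, not the authors' categories.

# Illustrative sketch only: a minimal tree structure for organising AI-crime categories.
# All labels below are hypothetical placeholders, not the paper's taxonomy.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list["Node"] = field(default_factory=list)

taxonomy = Node("AI crimes", [
    Node("Human responsible (AI used as a tool)", [
        Node("Fraud aided by generated content"),
        Node("Privacy violation via mass data collection"),
    ]),
    Node("Intelligent system responsible (autonomous behaviour)", [
        Node("Harm caused by a self-driving decision"),
        Node("Discriminatory automated decision"),
    ]),
])

def paths(node, prefix=()):
    # Yield every root-to-leaf path, e.g. as an investigation checklist.
    here = prefix + (node.label,)
    if not node.children:
        yield " > ".join(here)
    for child in node.children:
        yield from paths(child, here)

for p in paths(taxonomy):
    print(p)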
4 Addressing AI ethics through codification
Addressing AI ethics through codification - 2020
AI ethics is rapidly becoming one of the most significant issues in assessing the impact of AI on social welfare and development. A technology that does not meet a society's ethical criteria is likely to face a long and hard process of acceptance, regardless of its tremendous positive potential for long-term socio-economic development. The development of artificial intelligence (AI) technologies is undoubtedly associated with the need to answer ethical questions, and the perception of AI in society will be largely determined by compliance with ethical criteria, whether written or not. At the same time, AI as a technological system does not itself have a natural ethical content; the authors believe that in practice ethical concerns may be addressed by means of ethical codes and compliance rules that articulate what constitutes ethical behaviour in specific areas of application of AI systems. Such a set of rules (a code of AI ethics) could be followed by all actors throughout the complete lifecycle of the system, starting with the design stage. The specification of general ethical principles as industry-specific codes of practice would also facilitate the classification, evaluation and measurement of systems, both at the technical level and at the level of public perception and trust. The article considers examples of codification of ethical principles and offers several approaches for practical use in addressing AI ethics at the national and international levels.
Keywords: ethics | AI | codification | regulation | standards | responsibility | bias | trustworthiness | personal data protection | international cooperation in AI | soft regulation
English article
5 Trustworthy AI Development Guidelines for Human System Interaction
Trustworthy AI development guidelines for human-system interaction - 2020
Artificial Intelligence (AI) is influencing almost all areas of human life. Even though these AI-based systems frequently provide state-of-the-art performance, humans still hesitate to develop, deploy, and use AI systems. The main reason for this is the lack of trust in AI systems caused by the insufficient transparency of existing AI systems. As a solution, the "Trustworthy AI" research area emerged with the goal of defining guidelines and frameworks for improving user trust in AI systems, allowing humans to use them without fear. While trust in AI is an active area of research, very little work exists whose focus is building human trust to improve the interactions between humans and AI systems. In this paper, we provide a concise survey of concepts of trustworthy AI. Further, we present trustworthy AI development guidelines for improving user trust and enhancing the interactions between AI systems and humans that occur during the AI system life cycle.
Index Terms: Trustworthy AI | Transparency | Explainable AI | Human System Interactions | Human Machine Interactions | AI Life Cycle
English article
6 Build confidence and acceptance of AI-based decision support systems - Explainable and liable AI
Build confidence and acceptance of AI-based decision support systems - Explainable and liable AI - 2020
Artificial Intelligence has seen incredible development since 2012, driven by major improvements in sensors, data quality and quantity, storage and computing capacity, and so on. The promise of AI has led many scientific domains to implement AI-based decision support tools. However, despite numerous impressive results, very serious failures have raised human mistrust, fear and scorn toward AI. In industry, staff members cannot afford to use tools that might fail them. This is especially true for transportation operators, where security and safety are at risk. The question that arises is therefore how to build human confidence in and acceptance of AI-based decision support systems. In this paper, we combine different points of view to propose a structured overview of transparency, explicability and interpretability, with new definitions arising as a consequence. We then discuss the need for understandable information from the AI system, in order to legitimate or refute the tool's proposals. To conclude, we offer ethical reflections and ideas to develop confidence in AI.
Keywords: explainable AI | liable AI | decision support system | confidence | technology
English article
7 Identity-based quantum signature based on Bell states
Publication year: 2020 - English PDF pages: 8 - Persian DOC pages: 17
Based on Bell states, an identity-based quantum signature scheme is proposed. In our scheme, the signer's private key is generated by a trusted third party called the private key generator (PKG), while the signer's public key is his/her identity (such as his/her name or email address). The message to be signed is encoded as a sequence of Bell states. To generate the quantum signature, the signer signs the Bell-state sequence with his/her private key. The quantum signature can be verified by anyone using the signer's identity. Our quantum signature scheme has the advantages of classical identity-based signature schemes. It does not require long-term quantum memory. Moreover, in our scheme the verifier does not need to perform any quantum swap test during the signature verification phase. In our scheme the private key generator (PKG) can revoke (invalidate) a quantum signature, which is not feasible in many quantum signature schemes. Our scheme also provides security properties such as non-repudiation and unforgeability. Our signature is more secure, efficient and practical than other similar schemes. (A schematic sketch of this flow, with classical stand-ins, follows this entry.)
Keywords: Quantum signature | Identity-based signature | Bell state | Quantum swap test
Translated article
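The translated abstract above outlines the protocol flow: the PKG derives the signer's private key from a master secret and the signer's identity, the message is encoded as a sequence of Bell states, the signer signs that sequence with the private key, and anyone can verify against the signer's identity. The Python sketch below mirrors only that flow with classical stand-ins: the four Bell states appear as labels under the standard 2-bit encoding, and an HMAC replaces the quantum signing operation. It is a structural illustration, not an implementation of the paper's quantum scheme.

# Illustrative sketch only: the identity-based flow from the abstract with classical stand-ins.
# Bell states are represented by labels; HMAC replaces the quantum signing/verification steps.
import hashlib
import hmac

BELL = {"00": "|Phi+>", "01": "|Phi->", "10": "|Psi+>", "11": "|Psi->"}  # standard 2-bit encoding

MASTER_KEY = b"PKG master secret (hypothetical)"

def pkg_issue_private_key(identity: str) -> bytes:
    # The trusted PKG derives the signer's private key from his/her identity.
    return hmac.new(MASTER_KEY, identity.encode(), hashlib.sha256).digest()

def encode_as_bell_sequence(message: bytes) -> list[str]:
    # Map each 2-bit block of the message to one of the four Bell states.
    bits = "".join(f"{byte:08b}" for byte in message)
    return [BELL[bits[i:i + 2]] for i in range(0, len(bits), 2)]

def sign(message: bytes, private_key: bytes) -> bytes:
    sequence = "".join(encode_as_bell_sequence(message))
    return hmac.new(private_key, sequence.encode(), hashlib.sha256).digest()

def verify(message: bytes, signature: bytes, identity: str) -> bool:
    # In the real scheme verification uses only the public identity; this classical
    # stand-in has to re-derive the key, which an outside verifier could not do.
    expected = sign(message, pkg_issue_private_key(identity))
    return hmac.compare_digest(expected, signature)

sig = sign(b"hello", pkg_issue_private_key("alice@example.com"))
print(verify(b"hello", sig, "alice@example.com"))  # True
print(verify(b"hullo", sig, "alice@example.com"))  # False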
8 Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust
Productive employment and decent work: The impact of AI adoption on psychological contracts, job engagement and employee trust - 2020
This research examines the tension between the aims of the United Nations' Sustainable Development Goal 8 (SDG 8), to promote productive employment and decent work, and the adoption of Artificial Intelligence (AI). Our findings are based on the analysis of 232 survey responses, with which we tested the effects of AI adoption on workers' psychological contract, engagement and trust. We find that psychological contracts have a significant, positive effect on job engagement and on trust. Yet, with AI adoption, the positive effect of psychological contracts falls significantly. A further re-examination of the extant literature leads us to posit that AI adoption fosters the creation of a third type of psychological contract, which we term "Alienational". Whereas SDG 8 is premised on strengthening relational contracts between an organization and its employees, the adoption of AI has the opposite effect, detracting from the very nature of decent work. (An illustrative moderation test of this kind of effect follows this entry.)
Keywords: Artificial intelligence | Psychological contract | Employee engagement | Job trust | Sustainable development goals | Decent work
English article
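The abstract above reports a moderation effect: the positive influence of the psychological contract on engagement weakens when AI is adopted. As an illustration only, the Python sketch below shows the usual way such an effect is tested, an OLS model with an interaction term, fitted to synthetic data; the variable names, coefficients and data are assumptions, not the authors' model or dataset.

# Illustrative sketch only: testing whether AI adoption moderates the effect of the
# psychological contract (pc) on job engagement, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 232  # same sample size as the study; the data themselves are synthetic
pc = rng.normal(0, 1, n)            # psychological contract strength
ai = rng.integers(0, 2, n)          # 1 = AI adopted in the workplace
# A positive pc effect that shrinks under AI adoption, plus noise:
engagement = 0.6 * pc - 0.4 * ai * pc + rng.normal(0, 1, n)

df = pd.DataFrame({"pc": pc, "ai": ai, "engagement": engagement})
model = smf.ols("engagement ~ pc * ai", data=df).fit()  # pc, ai, and pc:ai interaction
print(model.params)  # a negative pc:ai coefficient reflects the weakened effect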
9 Mobility-Aware Load Balancing for Reliable Self-Organization Networks: Multi-Agent Deep Reinforcement Learning
Mobility-aware load balancing for reliable self-organizing networks: Multi-agent deep reinforcement learning - 2020
Self-Organizing Networks (SON) are a collection of functions for the automatic configuration, optimization, and healing of networks, and mobility optimization is one of the main functions of self-organized cellular networks. State-of-the-art Mobility Robustness Optimization (MRO) schemes have relied on rule-based recommender systems to search the parameter space, yet it is unwieldy to design rules for all possible mobility patterns in any network. In this regard, we present a deep reinforcement learning-based MRO solution (DRL-MRO), which learns the appropriate values of the required parameters for each mobility pattern in individual cells. The optimal setting of handover parameters also depends on the distribution of users and their velocities in the network. In this framework, an effective mobility-aware load balancing approach is applied to configure the parameters autonomously according to the mobility patterns, so that approximately the same quality level is provided to each subscriber. The simulation results show that the mobility robustness optimization function not only learns to optimize handover (HO) performance, but also learns how to distribute excess load throughout the network. The experimental results show that this solution minimizes the number of unsatisfied subscribers (Nus) and, by sharing load between cells, guarantees a more balanced network while increasing cell throughput, outperforming current schemes. (A toy per-cell learning loop in this spirit is sketched after this entry.)
Keywords: Distributed Learning Automata | Self-Optimization Networking | Mobility Management | Cognitive Cellular Networks | Load Balancing
English article
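The abstract above describes learning, cell by cell, handover parameter values that suit the local mobility pattern and load. The paper uses multi-agent deep reinforcement learning; the Python sketch below deliberately swaps that for a much simpler tabular learner (effectively a contextual bandit) choosing among discretised handover offsets in a toy environment, just to show the shape of a per-cell learning loop. The reward function, state discretisation and environment are invented for illustration and are not the paper's DRL-MRO design.

# Illustrative sketch only: tabular stand-in for per-cell handover-offset tuning.
# The paper uses multi-agent deep RL; the environment below is a toy model.
import numpy as np

rng = np.random.default_rng(1)
OFFSETS_DB = [-4, -2, 0, 2, 4]  # candidate handover offset actions (dB), hypothetical
LOAD_BINS = 3                   # discretised cell load: low / medium / high

def toy_reward(load_bin: int, offset_db: float) -> float:
    # Hypothetical reward: penalise handover failures and unsatisfied users.
    # Higher load prefers a more negative offset (hand users over earlier).
    target = {0: 2, 1: 0, 2: -3}[load_bin]
    return -(abs(offset_db - target) + rng.normal(0, 0.3))

q = np.zeros((LOAD_BINS, len(OFFSETS_DB)))  # one table per cell in a full system
alpha, epsilon = 0.1, 0.2

for step in range(5000):
    load = rng.integers(0, LOAD_BINS)              # observed cell load this round
    if rng.random() < epsilon:
        action = rng.integers(0, len(OFFSETS_DB))  # explore
    else:
        action = int(np.argmax(q[load]))           # exploit current estimate
    reward = toy_reward(load, OFFSETS_DB[action])
    q[load, action] += alpha * (reward - q[load, action])

for load in range(LOAD_BINS):
    print(f"load bin {load}: learned offset {OFFSETS_DB[int(np.argmax(q[load]))]} dB")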
10 Trustworthy AI in the Age of Pervasive Computing and Big Data
Trustworthy AI in the age of pervasive computing and big data - 2020
The era of pervasive computing has resulted in countless devices that continuously monitor users and their environment, generating an abundance of user behavioural data. Such data may support improving the quality of service, but may also lead to adverse usages such as surveillance and advertisement. In parallel, Artificial Intelligence (AI) systems are being applied to sensitive fields such as healthcare, justice, or human resources, raising multiple concerns on the trustworthiness of such systems. Trust in AI systems is thus intrinsically linked to ethics, including the ethics of algorithms, the ethics of data, or the ethics of practice. In this paper, we formalise the requirements of trustworthy AI systems through an ethics perspective. We specifically focus on the aspects that can be integrated into the design and development of AI systems. After discussing the state of research and the remaining challenges, we show how a concrete use-case in smart cities can benefit from these methods.
Index Terms: Artificial Intelligence | Pervasive Computing | Ethics | Data Fusion | Transparency | Privacy | Fairness | Accountability | Federated Learning
English article