IP Addresses in the Context of Digital Evidence in the Criminal and Civil Case Law of the Slovak Republic
2020
The use of IP addresses by courts in their decisions is an issue of growing importance, especially as the internet is increasingly used as a means to violate provisions of both civil and criminal law. This paper focuses predominantly on two issues: (1) the use of IP addresses as digital evidence in criminal and civil proceedings, and possible mistakes in the courts' approach to this specific type of evidence; and (2) the anonymisation of IP addresses in cases where IP addresses are to be considered personal data. The paper analyses the relevant judicial decisions of the Slovak Republic from 2008 to 2019 in which courts used IP addresses as evidence. On this basis, the authors formulate conclusions on the current state and developing trends in the use of digital evidence in judicial proceedings. The authors demonstrate common errors in courts' decisions regarding the use of IP addresses as evidence, concerning the anonymisation of IP addresses, the application of the in dubio pro reo principle in criminal proceedings, and the relationship between IP addresses, devices, and persons.
Keywords: IP address | Digital evidence | Criminal and civil proceedings | Privacy | Personal data | Anonymisation
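The anonymisation the abstract discusses can be sketched concretely. A common convention, used for example by web analytics tools, is to zero the host portion of the address; the truncation lengths below (/24 for IPv4, /48 for IPv6) are illustrative assumptions, not requirements drawn from the paper:

```python
import ipaddress

def anonymise_ip(addr: str) -> str:
    """Anonymise an IP address by zeroing its host portion."""
    ip = ipaddress.ip_address(addr)
    # Assumed truncation: keep the /24 network for IPv4, /48 for IPv6.
    prefix = 24 if ip.version == 4 else 48
    net = ipaddress.ip_network(f"{addr}/{prefix}", strict=False)
    return str(net.network_address)

print(anonymise_ip("192.168.42.17"))      # -> 192.168.42.0
print(anonymise_ip("2001:db8:abcd:12::1"))  # -> 2001:db8:abcd::
```

Note that such truncation only pseudonymises the address; whether the result still qualifies as personal data depends on what other information the data controller holds.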
AI Crimes: A Classification
2020
Intelligent and machine-learning systems have infiltrated cyber-physical systems and smart cities through technologies such as the Internet of Things, image processing, robotics, speech recognition, self-driving, and predictive maintenance. To gain user trust, such systems must be transparent and explainable. Regulations are required to control crimes associated with these technologies. Such regulations and legislation depend on the severity of the artificial intelligence (AI) crimes they cover and on whether humans and/or intelligent systems are responsible for committing them, and can therefore benefit from a classification tree of AI crimes. The aim of this paper is to review prior work on ethics for AI and to classify AI crimes by producing a classification tree that assists in AI crime investigation and regulation.
Keywords: AI | classification tree | crimes | ethics | explainable AI | transparency | trust | privacy
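A classification tree over the two dimensions the abstract names, responsibility (human, intelligent system, or both) and severity, can be sketched as a simple data structure. The branch labels below are illustrative placeholders, not the paper's actual taxonomy:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """One category in the classification tree."""
    label: str
    children: list = field(default_factory=list)

    def add(self, child):
        self.children.append(child)
        return child

    def paths(self, prefix=()):
        """Yield every root-to-leaf path as a tuple of labels."""
        here = prefix + (self.label,)
        if not self.children:
            yield here
        for c in self.children:
            yield from c.paths(here)

# Branch first on who is responsible, then on severity (both dimensions
# taken from the abstract; the concrete labels are assumptions).
root = Node("AI crime")
for actor in ("human-committed (AI as tool)",
              "system-committed (AI as agent)",
              "joint responsibility"):
    branch = root.add(Node(actor))
    for severity in ("minor", "severe"):
        branch.add(Node(severity))

for p in root.paths():
    print(" / ".join(p))
```

Each leaf path then pins down which regulation would apply and who can be held responsible.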
Towards Security and Privacy for Edge AI in IoT/IoE based Digital Marketing Environments
2020
Edge Artificial Intelligence (Edge AI) is a crucial aspect of current and future digital marketing in Internet of Things (IoT) / Internet of Everything (IoE) environments. Consumers often provide data to marketers, which is used to enhance services and provide a personalized customer experience (CX). However, the use, storage, and processing of such data has been a key concern. Edge computing has been said to advance the state of the art in security and privacy: for example, when data can be processed locally, close to where it is requested, security and privacy can be enhanced. However, Edge AI in such an environment is prone to security and privacy considerations of its own, especially in the digital marketing context, where personal data is involved. An ongoing challenge is maintaining security in this context while meeting various legal privacy requirements, which themselves continue to evolve and many of which are not entirely clear from a technical perspective. This paper navigates some key security and privacy issues for Edge AI in IoT/IoE digital marketing environments, along with some possible mitigations.
Keywords: edge security | edge privacy | edge AI | edge intelligence | artificial intelligence | AI | machine learning | ML | IoT | IoE | edge | cybersecurity | legal | law | digital marketing | smart | GDPR | CCPA | security | privacy
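One way to make the "process locally" idea concrete is to perturb sensitive values on the device before anything leaves the edge. The sketch below uses randomized response, a classical local-privacy mechanism; the marketing scenario and parameters are illustrative assumptions, not the paper's proposal:

```python
import random

def randomized_response(truth, p_truth=0.75, rng=random):
    """On-device: report the true bit with probability p_truth,
    otherwise report a fair coin flip. No raw bit leaves the edge."""
    if rng.random() < p_truth:
        return truth
    return rng.random() < 0.5

def estimate_rate(reports, p_truth=0.75):
    """Marketer side: unbiased estimate of the true positive rate
    recovered from the noisy reports alone."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p_truth) * 0.5) / p_truth

rng = random.Random(0)
true_bits = [i < 6000 for i in range(10000)]   # 60% hold the preference
reports = [randomized_response(b, rng=rng) for b in true_bits]
print(round(estimate_rate(reports), 2))
```

The marketer learns the aggregate preference rate for personalization, while any individual report is plausibly deniable.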
Attacking and defending multiple valuable secrets in a big data world
2020
This paper studies the attack-and-defence game between a web user and a whole set of players over this user's 'valuable secrets.' The number and type of these valuable secrets are the user's private information. Both attempts to tap information and privacy protection are costly. The multiplicity of secrets is of strategic value to the holders of these secrets. Users with few secrets keep their secrets private with some probability, even though they do not protect them. Users with many secrets protect their secrets at a cost that is smaller than the value of the secrets protected. The analysis also accounts for multiple redundant information channels with cost asymmetries, relating the analysis to attack-and-defence games with a weakest link.
Keywords: Big-data | Privacy | Conflict | Valuable secrets | Attack-and-defence
Hiding Private Information in Images From AI
2020
Privacy protection has attracted increasing concern in recent years. People tend to believe that large social platforms will honour their agreements to protect user privacy. However, photos uploaded by users are usually not processed in a way that protects privacy. For example, Facebook, the world's largest social platform, was found to have leaked photos of millions of users to commercial organizations for big-data analytics. A common analytical tool used by these organizations is the Deep Neural Network (DNN). Today's DNNs can accurately identify people's appearance, body shape, and hobbies, and even more sensitive personal information such as addresses, phone numbers, emails, and bank cards. To enable people to share photos without worrying about their privacy, we propose an algorithm that allows users to selectively protect their privacy while preserving the contextual information contained in images. The results show that the proposed algorithm can select and perturb the private objects to be protected among multiple candidate objects, so that the DNN can identify only the non-private objects in images.
Index Terms: privacy | object detection | deep learning
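The selective-protection idea can be illustrated with a toy stand-in for the detector. The actual paper targets real DNN detectors; the detector, threshold, and noise schedule below are all assumptions for illustration:

```python
import random

# Toy stand-in for a DNN detector: it "detects" an object in a region
# when the region's mean intensity exceeds a threshold.
def toy_detector_score(image, box):
    y0, y1, x0, x1 = box
    cells = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
    return sum(cells) / len(cells)

def perturb_private_object(image, box, step=0.05, threshold=0.5,
                           max_iters=50, seed=0):
    """Add noise only inside the private object's bounding box until the
    toy detector no longer fires there, leaving the context untouched."""
    img = [row[:] for row in image]
    rng = random.Random(seed)
    y0, y1, x0, x1 = box
    for _ in range(max_iters):
        if toy_detector_score(img, box) < threshold:
            break
        for y in range(y0, y1):
            for x in range(x0, x1):
                img[y][x] = max(0.0, img[y][x] - abs(rng.gauss(0, step)))
    return img

image = [[0.9] * 8 for _ in range(8)]   # bright image: "detected" everywhere
private_box = (2, 6, 2, 6)              # region holding the private object
protected = perturb_private_object(image, private_box)

print(toy_detector_score(protected, private_box) < 0.5)    # private region suppressed
print(toy_detector_score(protected, (0, 2, 0, 8)) >= 0.5)  # context preserved
```

The key property mirrors the abstract: only the selected private object is perturbed, while the surrounding context stays recognizable.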
Trustworthy AI in the Age of Pervasive Computing and Big Data
2020
The era of pervasive computing has resulted in countless devices that continuously monitor users and their environment, generating an abundance of user behavioural data. Such data may support improving the quality of service, but may also lead to adverse uses such as surveillance and targeted advertising. In parallel, Artificial Intelligence (AI) systems are being applied to sensitive fields such as healthcare, justice, and human resources, raising multiple concerns about the trustworthiness of such systems. Trust in AI systems is thus intrinsically linked to ethics, including the ethics of algorithms, the ethics of data, and the ethics of practice. In this paper, we formalise the requirements of trustworthy AI systems through an ethics perspective. We specifically focus on the aspects that can be integrated into the design and development of AI systems. After discussing the state of research and the remaining challenges, we show how a concrete use case in smart cities can benefit from these methods.
Index Terms: Artificial Intelligence | Pervasive Computing | Ethics | Data Fusion | Transparency | Privacy | Fairness | Accountability | Federated Learning
Towards privacy preserving AI based composition framework in edge networks using fully homomorphic encryption
2020
We present a privacy-preserving framework for Artificial Intelligence (AI) enabled composition in edge networks. Edge computing is a very promising technology for provisioning real-time AI services because of its low response time and network bandwidth requirements. Owing to limited computational capabilities, an edge device alone cannot provide complex AI services; complex AI tasks must be divided into multiple subtasks and distributed among multiple edge devices for efficient service provisioning. AI-enabled, or automatic, service composition is one of the essential AI tasks in service provisioning. In edge-computing-based service provisioning, composition-related tasks need to be offloaded to several edge nodes, which can be used for monitoring services, storing Quality-of-Service (QoS) data, and composing services to find the best composite service. Existing service composition methods use plaintext QoS data. Attackers may therefore compromise edge devices to reveal the QoS data of services and modify it to give an advantage to particular edge service providers, biasing the AI-based service composition. From this point of view, a privacy-preserving framework for AI-based service composition is required for edge networks. In our proposed framework, we introduce an AI-based composition model for edge services. Additionally, we present a privacy-preserving AI service composition framework that performs composition on encrypted QoS data using a fully homomorphic encryption (FHE) algorithm. We conduct several experiments to evaluate the performance of our proposed privacy-preserving service composition framework using a synthetic QoS dataset.
Keywords: Edge-AI | Artificial Intelligence | Privacy in edge networks | Privacy-preserving AI | Privacy-preserving AI-based service | composition | Privacy-preserving service composition
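Composing on encrypted QoS data can be sketched with a toy additively homomorphic scheme (Paillier) standing in for full FHE. The scheme choice, the tiny key size, and the QoS values below are illustrative assumptions; a real deployment would use a vetted FHE library, never hand-rolled crypto:

```python
import math
import random

# Toy Paillier cryptosystem: multiplying ciphertexts adds plaintexts.
p, q = 2357, 2551            # toy primes, far too small for real security
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

def he_add(c1, c2):
    """Homomorphic addition: the composer never sees plaintext QoS."""
    return (c1 * c2) % n2

# Each edge node reports an encrypted QoS score per service; the
# composer totals scores per candidate composite path, and only the
# client, holding the secret key, decrypts to pick the best path.
paths = {"A": [40, 35, 50], "B": [45, 30, 20]}
enc_totals = {}
for name, scores in paths.items():
    total = encrypt(0)
    for s in scores:
        total = he_add(total, encrypt(s))
    enc_totals[name] = total

best = max(enc_totals, key=lambda k: decrypt(enc_totals[k]))
print(best, decrypt(enc_totals[best]))   # -> A 125
```

Paillier supports only addition over ciphertexts, which suffices to total QoS scores here; the paper's FHE setting would in principle also allow the comparison itself to run on encrypted data.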
Knowledge Federation: A Unified and Hierarchical Privacy-Preserving AI Framework
2020
With strict protections and regulations on data privacy and security, conventional machine learning based on centralized datasets is confronted with significant challenges, making artificial intelligence (AI) impractical in many mission-critical and data-sensitive scenarios, such as finance, government, and health. Meanwhile, tremendous datasets are scattered in isolated silos across industries, organizations, different units of an organization, or different branches of an international organization. These valuable data resources are largely underused. To advance AI theory and applications, we propose a comprehensive framework, called Knowledge Federation (KF), that addresses these challenges by enabling AI while preserving data privacy and ownership. Going beyond the concepts of federated learning and secure multi-party computation, KF consists of four levels of federation: (1) the information level, covering low-level statistics and computation of data and meeting the requirements of simple queries, searching, and simplistic operators; (2) the model level, supporting training, learning, and inference; (3) the cognition level, enabling abstract feature representation at various levels of abstraction and context; and (4) the knowledge level, fusing knowledge discovery, representation, and reasoning. We further clarify the relationship and differentiation between knowledge federation and other related research areas. We have developed a reference implementation of KF, called the iBond Platform, to offer a production-quality KF platform enabling industrial applications in finance, insurance, marketing, and government. The iBond Platform will also help establish the KF community and a comprehensive ecosystem, and usher in a paradigm shift towards secure, privacy-preserving, and responsible AI.
As far as we know, knowledge federation is the first hierarchical and unified framework for secure multi-party computing (statistics, queries, searching, and low-level operations) and learning (training, representation, discovery, inference, and reasoning).
Index Terms: Knowledge Federation |Knowledge | Federated Learning | Secure Multi-party Computation | Secure Multi-party Learning
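The "model level" of federation can be sketched with minimal federated averaging: each silo trains on its own records and only parameter updates leave the silo. The one-parameter least-squares model, learning rate, and data below are illustrative assumptions, not the iBond Platform's actual protocol:

```python
def local_step(w, data, lr=0.02, epochs=100):
    """One silo's local training: least-squares fit of y = w * x.
    Only the updated parameter w is shared, never the raw records."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Two isolated data silos drawn from the same underlying model y = 3x.
silo_a = [(1.0, 3.0), (2.0, 6.0)]
silo_b = [(3.0, 9.0), (4.0, 12.0)]

w_global = 0.0
for _ in range(5):                                 # federation rounds
    local_ws = [local_step(w_global, d) for d in (silo_a, silo_b)]
    w_global = sum(local_ws) / len(local_ws)       # server averages updates

print(round(w_global, 3))   # -> 3.0
```

Both silos converge on the shared parameter without either one revealing its data, which is the core guarantee the framework builds on at every federation level.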
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2020
In the last few years, Artificial Intelligence (AI) has gained notable momentum that, if harnessed appropriately, may deliver the best of expectations across many application sectors. For this to occur shortly in Machine Learning, the entire community stands before the barrier of explainability, an inherent problem of the latest techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that was not present in the previous wave of AI (namely, expert systems and rule-based models). Paradigms addressing this problem fall within the so-called eXplainable AI (XAI) field, which is widely acknowledged as a crucial feature for the practical deployment of AI models. The overview presented in this article examines the existing literature and contributions already made in the field of XAI, including a prospect toward what is yet to be reached. For this purpose, we summarize previous efforts to define explainability in Machine Learning, establishing a novel definition of explainable Machine Learning that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. Departing from this definition, we propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at explaining Deep Learning methods, for which a second dedicated taxonomy is built and examined in detail. This critical literature analysis serves as the motivating background for a series of challenges faced by XAI, such as the interesting crossroads of data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability, and accountability at its core.
Our ultimate goal is to provide newcomers to the field of XAI with a thorough taxonomy that can serve as reference material to stimulate future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias against it for its perceived lack of interpretability.
Keywords: Explainable Artificial Intelligence | Machine Learning | Deep Learning | Data Fusion | Interpretability | Comprehensibility | Transparency | Privacy | Fairness | Accountability | Responsible Artificial Intelligence
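One model-agnostic, post-hoc technique within the taxonomy's scope is permutation feature importance: shuffle one feature and measure how much the model's error grows. The black-box model and data below are synthetic stand-ins for illustration:

```python
import random

def model(x):
    """Black box under explanation: in truth only feature 0 matters."""
    return 4.0 * x[0] + 0.0 * x[1]

rng = random.Random(0)
X = [[rng.random(), rng.random()] for _ in range(200)]
y = [model(x) for x in X]

def mse(preds, targets):
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(targets)

baseline = mse([model(x) for x in X], y)

def permutation_importance(feature):
    """Error increase when one feature's column is shuffled."""
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    rng.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return mse([model(x) for x in shuffled], y) - baseline

imp = [permutation_importance(f) for f in (0, 1)]
print(imp[0] > imp[1])   # -> True: the model relies on feature 0 only
```

The explanation needs no access to the model's internals, which is exactly what makes such post-hoc methods applicable to opaque sub-symbolic models.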
Privacy-preserving clustering for big data in cyber-physical-social systems: Survey and perspectives
2020
Clustering techniques play a critical role in data mining and have achieved great success in solving application problems such as community analysis, image retrieval, personalized recommendation, and activity prediction. This paper first reviews traditional clustering and the emerging multiple-clustering methods. Although existing methods perform well on small or particular datasets, they fall short when clustering is performed on CPSS big data because of the high cost of computation and storage. With powerful cloud computing, this challenge can be effectively addressed, but doing so poses an enormous threat to individual and corporate privacy. Privacy-preserving data mining has therefore attracted widespread attention in academia. Compared to other reviews, this paper focuses on privacy-preserving clustering techniques, providing a detailed overview and discussion. Specifically, we introduce a novel privacy-preserving tensor-based multiple clustering, propose a privacy-preserving tensor-based multiple clustering analytic and service framework, and give an illustrative case study on a public transportation dataset. Furthermore, we identify the remaining challenges of privacy-preserving clustering and discuss significant future research in this area.
Keywords: CPSS | Big data | Cloud computing | Privacy preserving | Clustering
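One privacy-preserving clustering idea of the kind such surveys cover is k-means in which each iteration releases only noise-masked sums and counts (the Laplace mechanism), so no raw point ever leaves the data holder. The privacy budget, the 1-D data, and k below are illustrative assumptions:

```python
import math
import random

def laplace(rng, scale):
    """Draw Laplace noise via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_kmeans_1d(data, k=2, epsilon=1.0, rounds=5, seed=0):
    rng = random.Random(seed)
    centers = [min(data), max(data)]              # simple initialisation
    for _ in range(rounds):
        sums, counts = [0.0] * k, [0.0] * k
        for x in data:
            j = min(range(k), key=lambda i: abs(x - centers[i]))
            sums[j] += x
            counts[j] += 1
        # Perturb the released aggregates, never the raw points.
        centers = [(sums[j] + laplace(rng, 1.0 / epsilon)) /
                   max(counts[j] + laplace(rng, 1.0 / epsilon), 1.0)
                   for j in range(k)]
    return sorted(centers)

rng = random.Random(1)
data = ([rng.gauss(1.0, 0.2) for _ in range(20)] +
        [rng.gauss(9.0, 0.2) for _ in range(20)])
print([round(c, 2) for c in dp_kmeans_1d(data)])
```

The recovered centres stay close to the true clusters around 1 and 9 while each released aggregate carries calibrated noise; tensor-based multiple clustering applies the same release-only-aggregates principle in higher dimensions.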