Download and view articles related to Mobile Devices :: Page 1
Search results - Mobile Devices

Number of articles found: 84
No. | Title | Type
1 Multiple contents offloading mechanism in AI-enabled opportunistic networks
Multiple content offloading mechanism in opportunistic networks equipped with artificial intelligence - 2020
With the rapid growth of mobile devices and the emergence of 5G applications, the burden on cellular networks and the use of the licensed band pose enormous challenges. To solve this problem, opportunistic communication is regarded as a potential solution: it can use unlicensed bands to forward content to users under delay-tolerance constraints and reduce cellular data traffic. Since opportunistic communication is easily interrupted when the User Equipment (UE) is moving, we adopt Artificial Intelligence (AI) to predict the location of the mobile UE. A meta-heuristic algorithm is then used to allocate multiple contents. In addition, deep learning-based methods typically need a lot of training time, so, based on the real-time requirements of the network, we propose an AI-enabled opportunistic network architecture, combined with Mobile Edge Computing (MEC), to implement edge AI applications. The simulation results show that the proposed multiple contents offloading mechanism can reduce cellular data traffic through UE location prediction and cache allocation (an illustrative sketch of the allocation step follows this entry).
Keywords: Opportunistic networks | MEC | Offloading | Content caching
English article
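Below is a minimal, hypothetical Python sketch of the content-allocation step only: a greedy placement of contents onto edge caches under capacity constraints. The paper itself uses a meta-heuristic allocation algorithm that is not described in the abstract, so the node names, the demand-per-size heuristic, and all numbers here are illustrative assumptions.

```python
# Greedy placement of contents onto edge caches (illustrative only; the paper
# relies on a meta-heuristic allocation algorithm that is not reproduced here).

def allocate_contents(contents, caches):
    """contents: list of (content_id, size, demand); caches: dict node -> free capacity."""
    plan = {node: [] for node in caches}
    # Favor contents with the highest demand per unit of size.
    for cid, size, demand in sorted(contents, key=lambda c: c[2] / c[1], reverse=True):
        # Among nodes that can still hold the content, pick the one with the most free space.
        candidates = [node for node, free in caches.items() if free >= size]
        if candidates:
            node = max(candidates, key=caches.get)
            plan[node].append(cid)
            caches[node] -= size
    return plan

if __name__ == "__main__":
    contents = [("news", 50, 90), ("video", 300, 120), ("map", 80, 40)]   # hypothetical items
    caches = {"edge_A": 200, "edge_B": 350}                               # hypothetical capacities
    print(allocate_contents(contents, caches))
```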
2 Reinforcement learning application in diabetes blood glucose control: A systematic review
Application of reinforcement learning in diabetes blood glucose control: a systematic review - 2020
Background: Reinforcement learning (RL) is a computational approach to understanding and automating goal-directed learning and decision-making. It is designed for problems that include a learning agent interacting with its environment to achieve a goal, for example, blood glucose (BG) control in diabetes mellitus (DM), where the learning agent and its environment are the controller and the body of the patient, respectively. RL algorithms could be used to design a fully closed-loop controller, providing a truly personalized insulin dosage regimen based exclusively on the patient’s own data. Objective: In this review we aim to evaluate state-of-the-art RL approaches to designing BG control algorithms in DM patients, reporting successfully implemented RL algorithms in closed-loop control, insulin infusion, decision support and personalized feedback in the context of DM. Methods: An exhaustive literature search was performed using different online databases, analyzing the literature from 1990 to 2019. In a first stage, a set of selection criteria was established in order to select the most relevant papers according to the title, keywords and abstract. Research questions were established and answered in a second stage, using the information extracted from the articles selected during the preliminary selection. Results: The initial search using title, keywords, and abstracts resulted in a total of 404 articles. After removal of duplicates from the record, 347 articles remained. An independent analysis and screening of the records against our inclusion and exclusion criteria defined in the Methods section resulted in the removal of 296 articles, leaving 51 relevant articles. A full-text assessment was conducted on the remaining relevant articles, which resulted in 29 relevant articles that were critically analyzed. Inter-rater agreement was measured using Cohen's Kappa test, and disagreements were resolved through discussion. Conclusions: The advances in health technologies and mobile devices have facilitated the implementation of RL algorithms for optimal glycemic regulation in diabetes. However, few articles in the literature focus on the application of these algorithms to the BG regulation problem. Moreover, such algorithms are designed for control tasks such as BG adjustment, and their use has increased recently in the diabetes research area, so we foresee RL algorithms being used more frequently for BG control in the coming years. Furthermore, the literature lacks focus on aspects that influence BG levels, such as meal intake and physical activity (PA), which should be included in the control problem. Finally, there is a need to perform clinical validation of the algorithms (a toy sketch of the agent-environment loop follows this entry).
Keywords: Reinforcement learning | Blood glucose control | Artificial pancreas | Closed-loop | Insulin infusion
English article
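As a toy illustration of the agent-environment framing described in this abstract (controller as agent, patient's body as environment), the sketch below runs tabular Q-learning against an invented, non-physiological glucose model. The dynamics, reward, and discretization are assumptions for illustration only, not a clinical algorithm from the paper.

```python
import random

# Toy agent-environment loop: the agent is the insulin controller, the environment
# is a crude, invented glucose model. NOT a clinical model; for illustration only.

ACTIONS = [0.0, 0.5, 1.0]        # candidate insulin doses (arbitrary units)
TARGET = 100.0                   # target blood glucose (mg/dL)

def step(glucose, dose):
    """One step of the invented dynamics: meals push BG up, insulin pulls it down."""
    meal = random.choice([0.0, 0.0, 40.0])                  # occasional meal disturbance
    glucose = glucose + meal - 30.0 * dose + random.gauss(0, 5)
    return glucose, -abs(glucose - TARGET)                  # reward penalizes deviation

def bucket(g):
    return int(min(max(g, 40), 300) // 20)                  # discretize BG for a tabular agent

q, alpha, gamma, eps = {}, 0.1, 0.95, 0.1
glucose = 120.0
for _ in range(20000):
    s = bucket(glucose)
    a = random.randrange(len(ACTIONS)) if random.random() < eps else \
        max(range(len(ACTIONS)), key=lambda i: q.get((s, i), 0.0))
    glucose, r = step(glucose, ACTIONS[a])
    s2 = bucket(glucose)
    best_next = max(q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))

best = max(range(len(ACTIONS)), key=lambda i: q.get((bucket(160), i), 0.0))
print("learned dose for BG around 160:", ACTIONS[best])
```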
3 A Survey on the Computation Offloading Approaches in Mobile Edge Computing: A Machine Learning-based Perspective
A survey of computation offloading approaches in mobile edge computing: a machine learning-based perspective - 2020
With the rapid developments in emerging mobile technologies, resource-hungry mobile applications such as media processing, online gaming, Augmented Reality (AR), and Virtual Reality (VR) play an essential role in both business and entertainment. To soften the burden of the complexity incurred by the fast development of these serving technologies, distributed Mobile Edge Computing (MEC) has been developed, aimed at bringing the computation environments near the end-users, usually within one hop, to meet predefined requirements. In the literature, offloading approaches are developed to connect the computation environments to mobile devices by transferring resource-hungry tasks to nearby servers. Because of problems such as inherent software and hardware heterogeneity, resource restrictions, dynamism, and the stochastic behavior of the ecosystem, computation offloading is considered one of the essential challenges in the MEC environment. However, to the best of the authors' knowledge, and in spite of its significance, there is no systematic, comprehensive, and detailed survey of machine learning-based (ML-based) computation offloading mechanisms in the MEC environment. In this paper, we provide a review of ML-based computation offloading mechanisms in the MEC environment in the form of a classical taxonomy, to identify the contemporary mechanisms on this crucial topic and to present open issues as well. The proposed taxonomy is classified into three main fields: reinforcement learning-based mechanisms, supervised learning-based mechanisms, and unsupervised learning-based mechanisms. Next, these classes are compared with each other based on essential features such as performance metrics, case studies, utilized techniques, and evaluation tools, and their advantages and weaknesses are discussed as well. Finally, open issues and uncovered or inadequately covered future research challenges are discussed, and the survey is concluded (a small sketch of a supervised offloading decision, one of the surveyed classes, follows this entry).
Keywords: Computation offloading | Mobile edge computing | Machine learning | Reinforcement learning | Supervised learning | Unsupervised learning
English article
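To make the survey's taxonomy concrete, here is a small sketch of the supervised learning-based class of offloading mechanisms: a decision tree trained on synthetic task and channel features, with labels derived from a simple latency comparison. The feature set, the latency model, and the CPU/uplink figures are assumptions, not drawn from any particular surveyed paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative "supervised learning-based" offloading decision: learn offload-vs-local
# from examples labeled by a simple latency comparison. All figures are synthetic.

rng = np.random.default_rng(0)
n = 2000
task_cycles = rng.uniform(1e8, 1e9, n)     # CPU cycles required by each task
data_bits   = rng.uniform(1e5, 1e7, n)     # input size to transmit (bits)
uplink_bps  = rng.uniform(1e6, 5e7, n)     # current uplink rate (bits/s)

local_cpu, edge_cpu = 1e9, 1e10            # assumed device and edge CPU speeds (Hz)
local_latency = task_cycles / local_cpu
offload_latency = data_bits / uplink_bps + task_cycles / edge_cpu
y = (offload_latency < local_latency).astype(int)          # 1 = offload is faster

X = np.column_stack([task_cycles, data_bits, uplink_bps])
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("training accuracy:", clf.score(X, y))
print("offload a heavy task on a fast link?", bool(clf.predict([[8e8, 2e5, 4e7]])[0]))
```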
4 Authentication and integrity of smartphone videos through multimedia container structure analysis
Authentication and integrity of smartphone videos through multimedia structure analysis - 2020
Nowadays, mobile devices have become the natural substitute for the digital camera, as they capture everyday situations easily and quickly, encouraging users to express themselves through images and videos. These videos can be shared across different platforms, exposing them to intentional manipulation by criminals who are aware of the weaknesses of forensic techniques and may exploit them to accuse an innocent person or exonerate a guilty one in a judicial process. Commonly, manufacturers do not comply 100% with the specifications of the standards for the creation of videos. Also, videos shared on social networks and instant messaging applications go through filtering and compression processes to reduce their size, facilitate their transfer, and optimize storage on those platforms. The omission of specifications, together with the transformations carried out by the platforms, embeds a characteristic pattern of features in the multimedia container of the videos. These patterns make it possible to distinguish the brand of the device that generated the video and the social network or instant messaging application used for the transfer. Research in recent years has focused on the analysis of AVI containers and tiny video datasets. This work presents a novel technique to detect possible attacks against MP4, MOV, and 3GP format videos that affect their integrity and authenticity. The method is based on the analysis of the structure of video containers generated by mobile devices and their behavior when shared through social networks or instant messaging applications, or when manipulated by editing programs. The objectives of the proposal are to verify the integrity of videos, identify the source of acquisition, and distinguish between original and manipulated videos (a minimal container-parsing sketch follows this entry).
Keywords: Forensic analysis | Metadata | Mobile device camera | Multimedia container structure | Social network video analysis | Video analysis | Video authenticity | Video integrity
English article
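Container-structure analysis of the kind described above starts from the box (atom) layout of the file. The following sketch, assuming standard ISO BMFF box headers (a 4-byte big-endian size plus a 4-byte type), lists the top-level boxes of an MP4/MOV/3GP file; it is not the paper's full feature-extraction or comparison pipeline.

```python
import struct
import sys

# List the top-level boxes (atoms) of an ISO BMFF file (MP4/MOV/3GP).
# Each box starts with a 4-byte big-endian size and a 4-byte type; size == 1 means
# a 64-bit "largesize" follows, size == 0 means the box extends to end of file.

def top_level_boxes(path):
    boxes = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            boxes.append((box_type.decode("latin-1"), size))
            if size == 0:
                break
            f.seek(size - header_len, 1)        # skip the box payload
    return boxes

if __name__ == "__main__":
    for name, size in top_level_boxes(sys.argv[1]):
        print(f"{name}: {size} bytes")
```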
5 A machine learning forensics technique to detect post-processing in digital videos
A machine learning forensic method to detect post-processing in digital videos - 2020
Technology has brought great benefits to human beings, serving to improve quality of life and enable great discoveries. However, its use can also involve many risks. Examples include mobile devices, digital cameras and video surveillance cameras, which offer excellent performance and generate a large number of images and videos. These files are generally shared on social platforms and are exposed to manipulation, compromising their authenticity and integrity. In a legal process, a manipulated video can provide the elements needed to accuse an innocent person of a crime or to exempt a guilty person from criminal acts. Therefore, it is essential to create robust forensic methods, which will strengthen justice administration systems and thus support fair decisions. This paper presents a novel forensic technique to detect the post-processing of digital videos in MP4, MOV and 3GP formats; concretely, it detects the social platform and editing program used to execute possible manipulation attacks. The proposed method is based on supervised machine learning techniques. To achieve our goal, we take advantage of the fact that social platforms and editing programs execute filtering and compression processes on videos when they are shared or manipulated. These transformations leave a characteristic pattern in the videos that allows us to detect the social platform or editing program efficiently. Three phases are involved in the method: 1) dataset preparation; 2) data feature extraction; 3) supervised model creation. To evaluate the scalability of the technique in real scenarios, we used a robust, heterogeneous dataset far superior to those used in the literature (a small sketch of the supervised stage follows this entry).
Keywords: Editing programs detection | Machine learning processing | Multimedia container structure | Social networks detection | Video forensics | Video post-processing detection
English article
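A minimal sketch of the supervised stage only, assuming the container-structure features have already been extracted into numeric vectors (that step is not shown): a random forest is fitted to placeholder data to predict which platform or editor last processed a video. The class names and feature values are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Supervised stage only: per-video feature vectors (assumed to come from a container
# analysis step that is not shown) are mapped to the platform/editor that last
# processed the video. Features, labels and class names below are placeholders.

rng = np.random.default_rng(0)
X = rng.random((300, 40))                                        # placeholder features
y = rng.choice(["original", "platform_A", "editor_B"], size=300) # placeholder classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy on placeholder data:", clf.score(X_te, y_te))
```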
6 ‘‘DRL + FL’’: An intelligent resource allocation model based on deep reinforcement learning for Mobile Edge Computing
"DRL + FL": یک مدل تخصیص منابع هوشمند مبتنی بر یادگیری تقویت عمیق برای محاسبات لبه تلفن همراه-2020
With the emergence of a large number of computation-intensive and time-sensitive applications, smart terminal devices with limited resources can only run the model-training part of most intelligent applications in the cloud, so a large amount of training data needs to be uploaded to the cloud. This is an important cause of core network congestion and poor user Quality of Experience (QoE). As an important extension and supplement of cloud computing, Mobile Edge Computing (MEC) sinks computing and storage resources from the cloud to the vicinity of User Mobile Devices (UMDs), greatly reducing service latency and alleviating the burden on core networks. However, due to the high cost of edge server deployment and maintenance, MEC also suffers from limited network and computing resources, and the edge network environment is complex and changeable. Therefore, how to reasonably allocate network and computing resources in a changeable MEC environment has become a major challenge. To combat this issue, this paper proposes an intelligent resource allocation model, ‘‘DRL + FL’’. Based on this model, an intelligent resource allocation algorithm, DDQN-RA, built on the emerging DRL algorithm framework DDQN, is designed to adaptively allocate network and computing resources. At the same time, the model integrates the FL framework with the mobile edge system to train DRL agents in a distributed way. This model can well address the problems of uploading large amounts of training data via wireless channels, non-IID and unbalanced training data when training DRL agents, restrictions on communication conditions, and data privacy. Experimental results show that the proposed ‘‘DRL + FL’’ model is superior to the traditional resource allocation algorithms SDR and LOBO and the intelligent resource allocation algorithm DRLRA in three aspects: minimizing the average energy consumption of the system, minimizing the average service delay, and balancing resource allocation (a minimal sketch of the DDQN target and federated averaging follows this entry).
Keywords: Mobile edge computing | Intelligent resource allocation | Deep reinforcement learning | Federated learning
English article
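Two building blocks behind a ‘‘DRL + FL’’ style design can be sketched briefly: the Double-DQN target, where the online network selects the next action and the target network evaluates it, and federated averaging of per-agent parameters. The tiny linear Q-function and the client sizes below are assumptions; this is not the paper's DDQN-RA algorithm.

```python
import numpy as np

# (1) Double-DQN target: the online network picks the action, the target network
#     evaluates it. (2) FedAvg: a size-weighted average of per-client parameters.
# The linear Q-function and all numbers are illustrative assumptions.

def q_values(weights, state):
    return state @ weights                       # linear Q(s, .), one column per action

def ddqn_target(reward, next_state, w_online, w_target, gamma=0.99):
    a_star = int(np.argmax(q_values(w_online, next_state)))    # action chosen by online net
    return reward + gamma * q_values(w_target, next_state)[a_star]

def federated_average(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(1)
w_clients = [rng.normal(size=(4, 3)) for _ in range(5)]         # 5 hypothetical edge agents
w_global = federated_average(w_clients, client_sizes=[100, 80, 120, 60, 140])
print("DDQN target:", ddqn_target(1.0, rng.normal(size=4), w_global, w_clients[0]))
```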
7 Stochastic Computing based AI System for Mobile Devices
A stochastic computing-based artificial intelligence system for mobile devices - 2020
In this paper, we present a stochastic computing based AI system for mobile devices. As AI technology advances, more complex computations are required. In the case of mobile devices, it is hard to accommodate the entire computation due to the power and area limitations of an embedded system. Since stochastic computing replaces complex computations with simple ones, mobile devices become able to accommodate the AI system. In order to verify our design, the embedded AI system including stochastic computing is implemented on a field-programmable gate array (FPGA), and we successfully demonstrate the feasibility of the proposal (a short bitstream sketch follows this entry).
English article
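The core stochastic-computing idea referenced in this abstract is that a value in [0, 1] is encoded as the fraction of 1s in a random bitstream, so multiplication reduces to a bitwise AND of two independent streams. The short sketch below demonstrates that encoding in software; the paper's FPGA implementation is not reproduced here.

```python
import numpy as np

# A value in [0, 1] is encoded as the fraction of 1s in a random bitstream;
# multiplying two values reduces to a bitwise AND of two independent streams.

rng = np.random.default_rng(42)
N = 100_000                                   # bitstream length (longer = more precise)

def encode(p):
    return rng.random(N) < p                  # Bernoulli bitstream with P(bit = 1) = p

a, b = 0.8, 0.25
product_stream = encode(a) & encode(b)        # an AND gate acts as a multiplier
print("stochastic estimate:", product_stream.mean(), "exact:", a * b)
```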
8 Where to Go Next?: A Realistic Evaluation of AI-Assisted Mobility Predictors for HetNets
Where to go next?: A realistic evaluation of AI-assisted mobility predictors for HetNets - 2020
5G is considered the ecosystem to support the ever-growing number of mobile devices and users requiring an unprecedented amount of data and highly demanding Quality of Experience (QoE). To accommodate these demands, 5G requires extreme densification of base station deployment, which will result in a network that requires overwhelming effort to maintain and manage. User mobility prediction in wireless communications can be exploited to overcome these challenges. Knowledge of where users will go next enables cellular networks to improve handover management. In addition, it allows networks to engage in advanced resource allocation and reservation, cell load prediction, and proactive energy saving. However, anticipating the movement of humans is, in itself, a challenge due to the lack of realistic mobility models and the insufficiency of cellular system models in capturing real network dynamics. In this paper, we evaluate Artificial Intelligence (AI)-assisted mobility predictors. We model mobility prediction as a multi-class classification problem to predict the future base station association of mobile users using Extreme Gradient Boosting Trees (XGBoost) and Deep Neural Networks (DNN). Using a realistic mobility model and a 3GPP-compliant cellular network simulator, results show that XGBoost outperforms DNN, with prediction accuracy reaching up to 95% in a heterogeneous network (HetNet) scenario with shadowing varied from 0 dB to 4 dB (a small classification sketch follows this entry).
Index Terms: Mobility prediction | AI | self-organizing networks (SON) | Deep Neural Networks | XGBoost | HetNets
English article
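The abstract frames mobility prediction as multi-class classification of the next serving base station. The sketch below, assuming invented trajectory features (current cell, previous cell, speed, heading) and a toy ground truth, fits an XGBoost classifier to show that framing; it does not use the paper's mobility model or simulator.

```python
import numpy as np
from xgboost import XGBClassifier

# Mobility prediction framed as multi-class classification of the next serving cell.
# The trajectory features and the toy ground truth (users tend to return to their
# previous cell) are invented for illustration.

rng = np.random.default_rng(0)
n_samples, n_cells = 5000, 5
X = np.column_stack([
    rng.integers(0, n_cells, n_samples),      # current serving cell
    rng.integers(0, n_cells, n_samples),      # previous serving cell
    rng.uniform(0, 30, n_samples),            # speed (m/s)
    rng.uniform(0, 360, n_samples),           # heading (degrees)
])
y = X[:, 1].astype(int)                       # toy target: next cell = previous cell

model = XGBClassifier(n_estimators=100, max_depth=4)
model.fit(X[:4000], y[:4000])
print("next-cell prediction accuracy:", model.score(X[4000:], y[4000:]))
```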
9 A novel method for malware detection on ML-based visualization technique
A novel method for malware detection with an ML-based visualization technique - 2020
Malware detection is one of the challenging tasks in network security. With the growth of network techniques and mobile devices, threats from malware, such as metamorphic malware, zero-day attacks, and code obfuscation, have become increasingly significant. Many machine learning (ML)-based malware detection methods have been proposed to address this problem. However, considering attacks from adversarial examples (AEs) and the exponential increase in malware variants nowadays, malware detection is still an active field of research. To overcome the current limitations, we propose a novel method using data visualization and adversarial training on ML-based detectors to efficiently detect different types of malware and their variants. Experimental results on the MS BIG malware database and the Ember database demonstrate that the proposed method is able to prevent zero-day attacks and achieves up to 97.73% accuracy, with 96.25% on average across all the malware tested (a short byte-to-image sketch follows this entry).
Keywords: Malware detection | Adversarial training | Adversarial examples | Image texture | Data visualization
English article
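The visualization step behind image-texture malware detectors of this kind can be sketched as reinterpreting raw binary bytes as a grayscale image, which a classifier then learns from. The fixed image width and the random placeholder binary below are assumptions, not the paper's preprocessing.

```python
import numpy as np

# Reinterpret raw binary bytes as a grayscale image; texture-based detectors then
# classify such images. The fixed width is a common convention, not the paper's.

def bytes_to_image(data: bytes, width: int = 256) -> np.ndarray:
    arr = np.frombuffer(data, dtype=np.uint8)
    height = len(arr) // width
    return arr[: height * width].reshape(height, width)   # drop the ragged tail

if __name__ == "__main__":
    fake_binary = np.random.default_rng(0).integers(0, 256, 65536, dtype=np.uint8).tobytes()
    img = bytes_to_image(fake_binary)
    print("image shape:", img.shape, "mean intensity:", float(img.mean()))
```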
10 Static malware detection and attribution in android byte-code through an end-to-end deep system
Static malware detection and attribution in Android byte-code through an end-to-end deep system - 2020
Android reflects a revolution in handhelds and mobile devices. It is a virtual machine-based, open-source mobile platform that powers millions of smartphones and devices and an even larger number of applications in its ecosystem. Surprisingly, in a short lifespan, Android has also seen a colossal expansion in application malware, with 99% of the total malware for smartphones being found in the Android ecosystem. Subsequently, quite a few techniques have been proposed in the literature for the analysis and detection of these malicious applications on the Android platform. The increasing and diversified nature of Android malware has immensely attenuated the usefulness of prevailing malware detectors, which leaves Android users susceptible to novel malware. In this paper, as a remedy to this problem, we propose an anti-malware system that uses customized, sufficiently deep learning models: end-to-end deep learning architectures that detect and attribute Android malware via opcodes extracted from application bytecode. Our results show that Bidirectional Long Short-Term Memory (BiLSTM) neural networks can be used to detect the static behavior of Android malware, beating state-of-the-art models without using handcrafted features. For our experiments, we also work with distinct and independent deep learning models, leveraging sequence specialists such as recurrent neural networks, Long Short-Term Memory networks and their bidirectional variant, as well as more common neural architectures such as fully connected networks, deep convnets, Diabolo networks (autoencoders), and generative graphical models such as deep belief networks, for static malware analysis on Android. To test our system, we have also assembled a bytecode dataset from three open and independently maintained state-of-the-art datasets. Our bytecode dataset, which is an order of magnitude larger, essentially suffices for our experiments. Our results suggest that our proposed system can lead to a better design of malware detectors, as we report an accuracy of 0.999 and an F1-score of 0.996 on a large dataset of more than 1.8 million Android applications (a minimal BiLSTM sketch follows this entry).
Keywords: End-to-end architecture | Malware analysis | Deep neural networks | Android and big data
English article
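A minimal BiLSTM opcode classifier in the spirit of this paper can be sketched with Keras: each application is a padded sequence of opcode IDs and the network outputs a malware probability. The vocabulary size, sequence length, layer sizes, and random placeholder data are arbitrary choices, not the paper's configuration or dataset.

```python
import numpy as np
import tensorflow as tf

# Opcode sequences -> malware probability with an Embedding + BiLSTM + sigmoid head.
# Vocabulary size, sequence length, layer sizes and the random data are placeholders.

VOCAB, SEQ_LEN = 256, 500                     # opcode vocabulary and padded sequence length

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB, 64),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.randint(0, VOCAB, size=(128, SEQ_LEN))      # placeholder opcode sequences
y = np.random.randint(0, 2, size=(128,))                  # placeholder labels
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
model.summary()
```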