Download and view articles related to Dynamic environment :: Page 1
Download the best ISI articles together with Persian translation
Search results - Dynamic environment

Number of articles found: 18
Row | Title | Type
1 Evolutionary hash functions for specific domains
Publication year: 2019
Hash functions are a key component of many essential applications, ranging from compilers, databases or internet browsers to videogames or network devices. The same reduced set of functions is extensively used and has become the de facto standard, since these functions provide very efficient results in searches over unsorted sets. However, depending on the characteristics of the data being hashed, the overall performance of these non-cryptographic hash functions can vary dramatically, becoming a very common source of performance loss. Hash functions are difficult to design: they are extremely non-linear and counterintuitive, and relationships among variables are often intricate and obscure. Surprisingly, very little scientific research is devoted to the design and experimental assessment of these widely used functions. In this work, in addition to performing an up-to-date comparison of state-of-the-art hash functions, we propose the use of evolutionary techniques for designing "ad hoc" non-cryptographic hash functions. Thus, genetic programming is used to automatically design a tailor-made hash function that can be continuously evolved if needed, so that it is always adapted to real-world dynamic environments. To validate the proposed approach, we compared several quality metrics for the generated functions and the most widely used non-cryptographic hash functions across eight different scenarios. The evolved hash functions outperformed the standard non-cryptographic hash functions in most of the cases tested.
Keywords: Genetic programming | Hash functions | Evolutionary algorithm | Automated design
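As a rough illustration of the idea in this abstract (not the authors' actual GP system), a candidate non-cryptographic hash can be scored by how evenly it spreads a sample of keys over a table's buckets; an evolutionary loop would then keep the candidates with the lowest score. The key set, bucket count and the two hand-written candidates below are assumptions made for the example.

    def bucket_score(hash_fn, keys, buckets):
        """Fitness for a hash candidate: chi-squared-style deviation of the
        bucket loads from a perfectly uniform spread (lower is better)."""
        counts = [0] * buckets
        for k in keys:
            counts[hash_fn(k) % buckets] += 1
        expected = len(keys) / buckets
        return sum((c - expected) ** 2 / expected for c in counts)

    def djb2(key):                      # classic non-cryptographic baseline
        h = 5381
        for ch in key:
            h = (h * 33 + ord(ch)) & 0xFFFFFFFF
        return h

    def naive(key):                     # weak candidate: ignores character order
        return sum(ord(ch) for ch in key)

    keys = ["user%d@example.com" % i for i in range(10000)]
    for fn in (djb2, naive):
        print(fn.__name__, round(bucket_score(fn, keys, 1024), 1))

In a genetic programming setting, bucket_score (possibly combined with speed and avalanche metrics) would serve as the fitness function that drives selection of evolved expression trees.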
English article
2 Dynamic multi-objective optimisation using deep reinforcement learning: benchmark, algorithm and an application to identify vulnerable zones based on water quality
Publication year: 2019
The dynamic multi-objective optimisation problem (DMOP) poses a great challenge to the reinforcement learning (RL) research area due to its dynamic nature: objective functions, constraints and problem parameters may change over time. This study aims to identify what is lacking in the existing benchmarks for multi-objective optimisation in dynamic environments in RL settings. Hence, a dynamic multi-objective testbed has been created, a modified version of the conventional deep-sea treasure (DST) hunt testbed. This modified testbed captures the changing aspects of a dynamic environment in that the problem characteristics change over time. To the authors' knowledge, this is the first dynamic multi-objective testbed for RL research, especially for deep reinforcement learning. In addition, a generic algorithm is proposed to solve the multi-objective optimisation problem in a dynamic constrained environment; it maintains equilibrium by mapping different objectives simultaneously to provide the best-compromise solution closest to the true Pareto front (PF). As a proof of concept, the developed algorithm has been implemented to build an expert system for a real-world scenario, using a Markov decision process to identify vulnerable zones based on water quality resilience in São Paulo, Brazil. The outcome of the implementation reveals that the proposed parity-Q deep Q network (PQDQN) algorithm is an efficient way to optimise decisions in a dynamic environment. Moreover, the results show that the PQDQN algorithm performs better than other state-of-the-art solutions in both the simulated and the real-world scenario.
Keywords: Dynamic environment | Reinforcement learning | Deep Q network | Water quality resilience | Meta-policy selection | Artificial intelligence
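The PQDQN algorithm itself is not reproduced here; the sketch below only illustrates the underlying idea of scalarising several objectives into one reward for a value update. The weights, the toy transition and the tabular Q (a DQN would learn the same target with a neural network) are all assumptions.

    from collections import defaultdict

    def scalarise(rewards, weights):
        """Weighted-sum scalarisation of a multi-objective reward vector."""
        return sum(r * w for r, w in zip(rewards, weights))

    def q_update(Q, s, a, rewards, s_next, actions, weights, alpha=0.1, gamma=0.95):
        """One tabular Q-learning step on the scalarised reward."""
        r = scalarise(rewards, weights)
        best_next = max(Q[(s_next, a2)] for a2 in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])

    Q = defaultdict(float)
    actions = ["left", "right"]
    # toy transition: taking "right" in state 0 yields (treasure=+1, time=-1);
    # re-weighting or re-learning when the objectives or constraints shift is
    # where the dynamic aspect would enter
    q_update(Q, 0, "right", (1.0, -1.0), 1, actions, weights=(0.7, 0.3))
    print(dict(Q))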
English article
3 Data-driven modeling and prediction of blood glucose dynamics: Machine learning applications in type 1 diabetes
Publication year: 2019
Background: Diabetes mellitus (DM) is a metabolic disorder that causes abnormal blood glucose (BG) regulation that might result in short and long-term health complications and even death if not properly managed. Currently, there is no cure for diabetes. However, self-management of the disease, especially keeping BG in the recommended range, is central to the treatment. This includes actively tracking BG levels and managing physical activity, diet, and insulin intake. The recent advancements in diabetes technologies and self-management applications have made it easier for patients to have more access to relevant data. In this regard, the development of an artificial pancreas (a closed-loop system), personalized decision systems, and BG event alarms are becoming more apparent than ever. Techniques such as predicting BG (modeling of a personalized profile), and modeling BG dynamics are central to the development of these diabetes management technologies. The increased availability of sufficient patient historical data has paved the way for the introduction of machine learning and its application for intelligent and improved systems for diabetes management. The capability of machine learning to solve complex tasks with dynamic environment and knowledge has contributed to its success in diabetes research. Motivation: Recently, machine learning and data mining have become popular, with their expanding application in diabetes research and within BG prediction services in particular. Despite the increasing and expanding popularity of machine learning applications in BG prediction services, updated reviews that map and materialize the current trends in modeling options and strategies are lacking within the context of BG prediction (modeling of personalized profile) in type 1 diabetes. Objective: The objective of this review is to develop a compact guide regarding modeling options and strategies of machine learning and a hybrid system focusing on the prediction of BG dynamics in type 1 diabetes. The review covers machine learning approaches pertinent to the controller of an artificial pancreas (closed-loop systems), modeling of personalized profiles, personalized decision support systems, and BG alarm event applications. Generally, the review will identify, assess, analyze, and discuss the current trends of machine learning applications within these contexts. Method: A rigorous literature review was conducted between August 2017 and February 2018 through various online databases, including Google Scholar, PubMed, ScienceDirect, and others. Additionally, peer-reviewed journals and articles were considered. Relevant studies were first identified by reviewing the title, keywords, and abstracts as preliminary filters with our selection criteria, and then we reviewed the full texts of the articles that were found relevant. Information from the selected literature was extracted based on predefined categories, which were based on previous research and further elaborated through brainstorming among the authors. Results: The initial search was done by analyzing the title, abstract, and keywords. A total of 624 papers were retrieved from DBLP Computer Science (25), Diabetes Technology and Therapeutics (31), Google Scholar (193), IEEE (267), Journal of Diabetes Science and Technology (31), PubMed/Medline (27), and ScienceDirect (50). After removing duplicates from the list, 417 records remained. 
Then, we independently assessed and screened the articles based on the inclusion and exclusion criteria, which eliminated another 204 papers, leaving 213 relevant papers. After a full-text assessment, 55 articles were left, which were critically analyzed. The inter-rater agreement was measured using a Cohen kappa test, and disagreements were resolved through discussion. Conclusion: Due to the complexity of BG dynamics, it remains difficult to achieve a universal model that produces an accurate prediction in every circumstance (i.e., hypo/eu/hyperglycemia e…
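The screening step above reports inter-rater agreement with Cohen's kappa; the short sketch below shows how that statistic is computed for two reviewers' include/exclude decisions (the example ratings are invented, not the review's data).

    def cohen_kappa(rater_a, rater_b):
        """Cohen's kappa: observed agreement corrected for chance agreement."""
        n = len(rater_a)
        labels = set(rater_a) | set(rater_b)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
        p_e = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
        return (p_o - p_e) / (1 - p_e)

    # two reviewers screening ten abstracts ("in" = include, "out" = exclude)
    a = ["in", "in", "out", "out", "in", "out", "in", "in", "out", "out"]
    b = ["in", "out", "out", "out", "in", "out", "in", "in", "in", "out"]
    print(round(cohen_kappa(a, b), 2))    # 0.6: agreement well above chance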
English article
4 pipsCloud: High performance cloud computing for remote sensing big data management and processing
Publication year: 2018
Massive, large-region coverage, multi-temporal, multi-spectral remote sensing (RS) datasets are employed widely due to the increasing requirements for accurate and up-to-date information about resources and the environment for regional and global monitoring. In general, RS data processing involves a complex multi-stage processing sequence, which comprises several independent processing steps according to the type of RS application. RS data processing for regional environmental and disaster monitoring is recognized as being computationally intensive and data intensive. We propose pipsCloud to address these issues in an efficient manner, which combines recent cloud computing and HPC techniques to obtain a large-scale RS data processing system that is suitable for on-demand real-time services. Due to the ubiquity, elasticity, and high-level transparency of the cloud computing model, massive RS data management and data processing for dynamic environmental monitoring can all be performed on the cloud via Web interfaces. A Hilbert-R+-based data indexing method is employed for the optimal querying and access of RS images, RS data products, and interim data. In the core platform beneath the cloud services, we provide a parallel file system for massive high dimensional RS data, as well as interfaces for accessing irregular RS data to improve data locality and optimize the I/O performance. Moreover, we use an adaptive RS data analysis workflow management system for on-demand workflow construction and the collaborative processing of a distributed complex chain of RS data, e.g., for forest fire detection, mineral resources detection, and coastline monitoring. Our experimental analysis demonstrated the efficiency of the pipsCloud platform.
Keywords: Big data | Cloud computing | Data-intensive computing | High performance computing | Remote sensing
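The abstract mentions a Hilbert-R+-based index for RS data; that index is not published here, but the standard Hilbert-curve mapping below shows how a tile's grid coordinates can be turned into a one-dimensional key so that spatially close tiles tend to receive close keys (the grid size and tile coordinates are illustrative).

    def hilbert_index(n, x, y):
        """Map grid cell (x, y) in an n-by-n grid (n a power of two) to its
        distance along the Hilbert curve; standard iterative formulation."""
        d = 0
        s = n // 2
        while s > 0:
            rx = 1 if x & s else 0
            ry = 1 if y & s else 0
            d += s * s * ((3 * rx) ^ ry)
            if ry == 0:                 # rotate/flip quadrant so the curve stays continuous
                if rx == 1:
                    x, y = n - 1 - x, n - 1 - y
                x, y = y, x
            s //= 2
        return d

    # three neighbouring tiles in a 16x16 tile grid -> keys 28, 35, 36
    for tile in [(5, 3), (5, 4), (6, 4)]:
        print(tile, hilbert_index(16, *tile))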
English article
5 Efficient incremental high utility pattern mining based on pre-large concept
Publication year: 2018
High utility pattern mining has been actively researched in recent years because it treats real-world databases better than traditional pattern mining approaches. Retail market data and web access data are representative examples of such real-world data. However, fundamental high utility pattern mining methods that target static data are not suitable for dynamic data environments. Methods based on the pre-large concept are more efficient than static approaches when dealing with dynamic data. Several methods handle dynamic data based on the pre-large concept, but they have two drawbacks: they have to scan the original data again, and they generate many candidate patterns. These two drawbacks are the main causes of performance degradation. To handle these problems, this paper suggests an efficient approach to pre-large-concept-based incremental utility pattern mining. The proposed method adopts a data structure better suited to mining high utility patterns in incremental environments. The state-of-the-art method performs a database scan many times, which is not suitable for incremental environments; our method needs only one scan, which makes it better suited to processing dynamic data. In addition, with the proposed data structure, high utility patterns can be mined in dynamic environments more efficiently than with the former method. Experimental results on real and synthetic datasets show that the proposed method outperforms the former method.
Keywords: Data mining | High utility patterns | Incremental mining | Utility mining | Pre-large
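For readers unfamiliar with the utility measure this abstract refers to, the small sketch below computes the utility of an itemset over a toy transaction database (item names, quantities and unit profits are invented); the incremental, pre-large-based maintenance of such utilities is what the paper itself addresses.

    def itemset_utility(itemset, transactions, unit_profit):
        """Total utility of an itemset: for every transaction containing all of
        its items, sum quantity * unit profit of those items."""
        total = 0
        for tx in transactions:          # tx maps item -> purchased quantity
            if all(item in tx for item in itemset):
                total += sum(tx[item] * unit_profit[item] for item in itemset)
        return total

    transactions = [
        {"bread": 2, "milk": 1},
        {"bread": 1, "milk": 3, "butter": 1},
        {"milk": 2},
    ]
    unit_profit = {"bread": 1.0, "milk": 0.5, "butter": 2.0}
    print(itemset_utility({"bread", "milk"}, transactions, unit_profit))   # 5.0

A pattern is "high utility" when this value reaches a user-given threshold; the pre-large concept keeps nearly-qualifying patterns around so that database updates rarely force a full rescan.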
English article
6 A hybrid and learning agent architecture for network intrusion detection
Publication year: 2017
Learning is an effective way of automating the adaptation of systems to their environment. This ability is especially relevant in dynamic environments such as computer networks, where new intrusions are constantly emerging, most of them having similarities and occurring frequently. Traditional intrusion detection systems still have limited adaptability because they are only able to detect intrusions previously set in the system design. This paper proposes HyLAA, a software agent architecture that combines case-based reasoning, reactive behavior and learning. Through its learning mechanism, HyLAA can adapt itself to its environment and identify new intrusions not previously specified in the system design. This is done by learning new reactive rules through observing recurrent good solutions to the same perception from the case-based reasoning system; these rules are then stored in the agent knowledge base. The effectiveness of HyLAA at detecting intrusions using case-based reasoning behavior, the accuracy of the classifier learned by the learning component, and both the performance and effectiveness of HyLAA at detecting intrusions using hybrid behavior with and without learning were evaluated, respectively, in four experiments. In the first experiment, HyLAA exhibited good effectiveness at detecting intrusions. In the second experiment, the classifiers learned by the learning component presented high accuracy. Both the hybrid agent behavior with learning and without learning (third and fourth experiments, respectively) presented greater effectiveness and a balance between performance and effectiveness, but only the hybrid behavior with learning showed better effectiveness and performance as long as the agent learns.
Keywords: Learning agents | Hybrid agents | Case-based reasoning | Ontologies | Information security | Intrusion detection systems
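The published HyLAA implementation is not shown here; the sketch below only mimics the cycle the abstract describes, under assumed names, actions and thresholds: reactive rules answer first, the case base answers otherwise, and a solution that recurs often enough for the same perception is promoted to a new reactive rule.

    from collections import Counter

    class HyLAAStyleAgent:
        """Illustrative sketch, not the published HyLAA code."""
        def __init__(self, promote_after=3):
            self.reactive_rules = {}           # perception -> action (learned rules)
            self.case_base = {}                # perception -> action (solved cases)
            self.recurrences = Counter()       # how often a (perception, action) pair recurred
            self.promote_after = promote_after

        def act(self, perception):
            if perception in self.reactive_rules:                    # fast reactive path
                return self.reactive_rules[perception]
            action = self.case_base.get(perception, "raise-alert")   # CBR fallback (default action assumed)
            self.recurrences[(perception, action)] += 1
            if self.recurrences[(perception, action)] >= self.promote_after:
                self.reactive_rules[perception] = action             # learn a new reactive rule
            return action

    agent = HyLAAStyleAgent()
    for _ in range(3):
        agent.act("syn-flood-pattern")      # the same perception recurs...
    print(agent.reactive_rules)             # ...and is now handled reactively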
English article
7 A profit-aware and Multi-QoS constrained approach for concurrent workflows in heterogeneous systems
Publication year: 2017 - English PDF pages: 11 - Persian DOC pages: 32
Executing an application can cause workload imbalance among the corresponding processors, which ultimately leads to wasted resources and higher cost. In this paper we present a dynamic resource management system in which processors are assigned not to whole jobs but to subsets of them called tasks, which increases the resource utilization rate. The paper presents a scheduling algorithm that manages concurrent workflows: resources are used simultaneously, and each job has its own deadline and budget. Earlier research attempted dynamic strategies for such workflows but considered only balanced resource allocation and reduced execution time. The profit-aware scheduling algorithm MQ-PAS presented here increases the final profit by taking the available budget into account. We demonstrate the capabilities of the algorithm and evaluate it with different workflow structures. Experimental results show that our strategy significantly increases the provider's profit and achieves a higher success rate than comparable work.
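As a loose illustration of the budget-driven choice described above (not the published MQ-PAS algorithm), the sketch below picks, among processors that can still meet a task's deadline, the one whose execution cost leaves the largest share of the task's budget as provider profit; the charging model and all numbers are assumptions.

    def pick_processor(task, processors, now):
        """Greedy, profit-aware choice: skip processors that would miss the
        deadline, then maximise budget minus execution cost (assumed charging)."""
        best, best_profit = None, float("-inf")
        for p in processors:
            runtime = task["work"] / p["speed"]
            if now + runtime > task["deadline"]:
                continue                                   # deadline QoS would be violated
            profit = task["budget"] - runtime * p["cost_per_hour"]
            if profit > best_profit:
                best, best_profit = p, profit
        return best, best_profit

    procs = [{"name": "fast", "speed": 8.0, "cost_per_hour": 4.0},
             {"name": "slow", "speed": 2.0, "cost_per_hour": 0.5}]
    task = {"work": 8.0, "deadline": 6.0, "budget": 3.0}    # work in core-hours
    print(pick_processor(task, procs, now=0.0))             # slow node: meets the deadline, keeps profit positive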
Translated article
8 Computation partitioning for mobile cloud computing in a big data environment
Publication year: 2017
The growth of mobile cloud computing (MCC) is challenged by the need to adapt to the resources and environment available to mobile clients while addressing dynamic changes in network bandwidth. Big data can be handled via MCC. In this paper, we propose a model of computation partitioning for stateful data in a dynamic environment that improves performance. First, we constructed a model of stateful data streaming and investigated the method of computation partitioning in a dynamic environment. We defined the direction and calculation of the partitioning scheme, covering single-frame data flow, task scheduling and execution efficiency, and we formulated and analysed the partitioning decision problem for multi-frame data flow optimized for dynamic conditions. Second, we proposed a computation partitioning method for single-frame data flow. We determined the data parameters of the application model, the computation partitioning scheme, and the task and work order data stream model, and followed the scheduling method to obtain the optimal data-frame execution time after partitioning and the best partitioning method. Third, we explored a partitioning method for single-frame data flow based on multi-frame data, using multi-frame optimization adjustment and prediction of future changes in network bandwidth. We demonstrate that the multi-frame calculation method under changing network bandwidth is more efficient than the method limited to single-frame calculations. Finally, our research verified the effectiveness of single-frame data in the data-stream application and analyzed the performance of the method for optimizing the adjustment of multi-frame data. We used a prototype mobile cloud computing platform for face recognition to verify the effectiveness of the method.
Index Terms: Big data | computation partitioning | data stream | dynamic environment | mobile cloud computing | stateful
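A minimal sketch of the offloading trade-off that motivates computation partitioning (a textbook formulation, not the paper's multi-frame model): a frame is worth sending to the cloud only if transferring its state at the current bandwidth plus remote execution beats local execution. All parameter values below are invented.

    def should_offload(cycles, state_bytes, local_mips, cloud_mips, bandwidth_mbps):
        """Return True if cloud execution (transfer + remote compute) is faster
        than local execution for one data frame."""
        local_time = cycles / (local_mips * 1e6)
        transfer_time = state_bytes * 8 / (bandwidth_mbps * 1e6)
        remote_time = transfer_time + cycles / (cloud_mips * 1e6)
        return remote_time < local_time

    # e.g. a 500-million-instruction face-recognition frame with 200 kB of state
    print(should_offload(5e8, 200_000, local_mips=1_000, cloud_mips=20_000, bandwidth_mbps=10))

When bandwidth drops, transfer_time grows and the decision flips back to local execution, which is the dynamic behaviour the partitioning scheme has to track across frames.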
English article
9 The strategic fit between innovation strategies and business environment in delivering business performance
Publication year: 2016
This paper examines the role of business environments (in terms of dynamism and competitiveness) as contingency factors which affect the effectiveness of different types of innovation strategies (in terms of product and process) in delivering business performance. Using data from 207 manufacturing firms in Australia, this study shows that dynamic environments strengthen the effect of product innovation on business performance. Competitive environments, on the other hand, weaken the effect of product innovation on business performance, but strengthen the effect of process innovation on business performance. Overall, this study demonstrates the strategic fit between dynamism and product innovation strategy as well as between competitiveness and process innovation strategy. On the other hand, competitiveness also shows a strategic mismatch with product innovation. The theoretical and practical implications are discussed.
Keywords: Innovation strategies | Business environment | Strategic fit | Business performance
English article
10 The strategic fit between innovation strategies and business environment in delivering business performance
Publication year: 2016 - English PDF pages: 9 - Persian DOC pages: 34
This paper examines the role of business environments (in terms of dynamism and competitiveness) as contingency factors that affect how effectively different types of innovation strategies (in terms of product and process) deliver business performance. Using data from 207 manufacturing firms in Australia, the study shows that dynamic environments strengthen the effect of product innovation on business performance. Competitive environments, on the other hand, weaken the effect of product innovation on business performance but strengthen the effect of process innovation on business performance. Overall, the study demonstrates a strategic fit between dynamism and product innovation as well as between competitiveness and process innovation. Competitiveness, on the other hand, also shows a strategic mismatch with product innovation. The theoretical and practical implications are discussed.
Keywords: Innovation strategies | Business environment | Strategic fit | Business performance
Translated article