Neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing
Publication year: 2020 - English PDF file: 16 pages - Persian DOC file: 45 pages
Workflow scheduling is a research topic that has been studied extensively in cloud computing; it provisions cloud resources for workflow tasks so that the objectives specified in the QoS are met. In this paper, we model the dynamic workflow scheduling problem as a dynamic multi-objective optimization problem (DMOP), in which the sources of dynamism are resource failures and the number of objectives, both of which may change over time. Software faults or hardware failures may cause the first type of dynamism, while real-life scenarios encountered in cloud computing may change the number of objectives during workflow execution. In this study, we propose a prediction-based dynamic multi-objective evolutionary algorithm, named NN-DNSGA-II, which combines an artificial neural network with the NSGA-II algorithm. In addition, five non-prediction-based dynamic algorithms from the literature are adapted for the dynamic workflow scheduling problem. Scheduling solutions are found with respect to six objectives: minimization of makespan, cost, energy, and degree of imbalance, and maximization of reliability and utilization. Empirical studies based on real-world applications from the Pegasus workflow management system show that our NN-DNSGA-II algorithm significantly outperforms its alternatives in most cases, with respect to the metrics used for DMOPs whose true Pareto-optimal front is unknown, including the number of non-dominated solutions, Schott's spacing, and the Hypervolume indicator.
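The NSGA-II machinery underlying NN-DNSGA-II ranks candidate schedules by Pareto dominance before selection. A minimal Python sketch of fast non-dominated sorting, assuming all objectives are minimized (maximization objectives can simply be negated); the function and variable names are illustrative, not from the paper:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts; returns lists of indices."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]  # solutions each i dominates
    counts = [0] * n                       # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                counts[i] += 1
        if counts[i] == 0:
            fronts[0].append(i)  # non-dominated: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                counts[j] -= 1
                if counts[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    fronts.pop()  # drop trailing empty front
    return fronts
```

In NSGA-II proper, these fronts are then refined with crowding-distance sorting to preserve diversity along the Pareto front.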
Estimating monthly wet sulfur (S) deposition flux over China using an ensemble model of improved machine learning and geostatistical approach (2019)
Wet S deposition has been treated as a key issue because of its negative effects on soil acidification, biodiversity loss, and global climate change. However, the limited number of ground-level monitoring sites makes it difficult to fully clarify the spatiotemporal variations of wet S deposition over China. Therefore, an ensemble model of improved machine learning and a geostatistical method, named the fruit fly optimization algorithm-random forest-spatiotemporal Kriging (FOA-RF-STK) model, was developed to estimate nationwide S deposition based on the emission inventory, meteorological factors, and other geographical covariates. The ensemble model can capture the relationship between predictors and S deposition flux with better performance (R2=0.68, root mean square error (RMSE)=7.51 kg ha−1 yr−1) than the original RF model (R2=0.52, RMSE=8.99 kg ha−1 yr−1). Based on the improved model, the highest and lowest S deposition fluxes were predicted to be concentrated in Southeast China (69.57 kg S ha−1 yr−1) and Inner Mongolia (42.37 kg S ha−1 yr−1), respectively. The estimated wet S deposition flux displayed remarkable seasonal variation, with the highest value in summer (22.22 kg S ha−1 sea−1), followed by autumn (18.30 kg S ha−1 sea−1) and spring (16.27 kg S ha−1 sea−1), and the lowest in winter (14.71 kg S ha−1 sea−1), which was closely associated with rainfall amounts. The study provides a novel approach for S deposition estimation at a national scale.
Keywords: Wet S deposition | Machine learning | Geostatistical approach | China
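The geostatistical half of the FOA-RF-STK ensemble above interpolates sparse monitoring-site values across space. As a much simpler stand-in for spatiotemporal Kriging, inverse-distance weighting illustrates the basic interpolation idea; all names and values below are illustrative, not from the paper:

```python
import math

def idw(known, query, power=2):
    """Inverse-distance-weighted estimate at `query` from `known` samples.

    known -- list of (x, y, value) monitoring-site samples
    query -- (x, y) location to estimate
    """
    num = den = 0.0
    for x, y, value in known:
        d = math.hypot(x - query[0], y - query[1])
        if d == 0.0:
            return value  # query coincides with a monitoring site
        w = d ** -power
        num += w * value
        den += w
    return num / den
```

Unlike Kriging, IDW ignores the spatial covariance structure of the field; it is shown here only to make the interpolation step concrete.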
Problems of engineering entrepreneurship in Africa: A design optimization example in solar thermal engineering (2019)
This paper addresses Africa's challenges and opportunities for engineering entrepreneurs. A business environmental scan is done in line with the standard PESTLE analysis, identifying at least twenty generic problems across the continent. Focus is directed to an opportunity in solar water heating, where inadequate electricity supply combines with a plentiful solar resource amidst environmental protection awareness, to make investments potentially worthwhile. Three home-level market segments are identified. Key issues in the PESTLE scan are linked with available materials to formulate and solve a design optimization model for these segments. A competition-less product emerges for rural homes. Another, for small urban homes, can be retailed at 50% of current equivalent system prices and yet still make a profit for the entrepreneur. Both these systems attain average temperatures in excess of 57 °C, the fatal level for most pathogenic bacteria. The third and larger system, for rich urban homes, incorporates a supplementary electric heater that is programmable to kick in half an hour before water withdrawal if solar energy has failed to maintain the water temperature above 60 °C. The entrepreneur can still make a profit if the product retails at 52% of the equivalent competition price.
Keywords: Africa | Design optimization | Engineering entrepreneurship | PESTLE analysis | Solar water heating
On initial population generation in feature subset selection (2019)
Performance of evolutionary algorithms depends on many factors such as population size, number of generations, crossover or mutation probability, etc. Generating the initial population is one of the important steps in evolutionary algorithms. A poor initial population may unnecessarily increase the number of searches or it may cause the algorithm to converge at local optima. In this study, we aim to find a promising method for generating the initial population, in the Feature Subset Selection (FSS) domain. FSS is not considered as an expert system by itself, yet it constitutes a significant step in many expert systems. It eliminates redundancy in data, which decreases training time and improves solution quality. To achieve our goal, we compare a total of five different initial population generation methods: Information Gain Ranking (IGR), greedy approach and three types of random approaches. We evaluate these methods using a specialized Teaching Learning Based Optimization searching algorithm (MTLBO-MD), and three supervised learning classifiers: Logistic Regression, Support Vector Machines, and Extreme Learning Machine. In our experiments, we employ 12 publicly available datasets, mostly obtained from the well-known UCI Machine Learning Repository. According to their feature sizes and instance counts, we manually classify these datasets as small, medium, or large-sized. Experimental results indicate that all tested methods achieve similar solutions on small-sized datasets. For medium-sized and large-sized datasets, however, the IGR method provides a better starting point in terms of execution time and learning performance. Finally, when compared with other studies in literature, the IGR method proves to be a viable option for initial population generation.
Keywords: Feature subset selection | Initial population | Multiobjective optimization
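The IGR seeding strategy discussed above ranks features by their information gain before building the initial population. A minimal sketch of the information-gain computation for discrete feature values, written from the standard definition (not the authors' code):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a label sequence."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Information gain of a discrete feature with respect to the labels:
    H(labels) minus the expected entropy after splitting on the feature."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder
```

Ranking all features by this score and biasing initial chromosomes toward the top-ranked features is the essence of the IGR approach.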
Adsorption characteristics of supercritical CO2/CH4 on different types of coal and a machine learning approach (2019)
The injection of CO2 into deep coal beds can not only improve the recovery of CH4, but also contribute to the geological sequestration of CO2. The adsorption characteristics of coal determine the amount of greenhouse gas that deep coal seams can store in place. Using a self-developed adsorption facility for supercritical fluids, this paper studied the adsorption behavior of supercritical CO2 and CH4 on three types of coal (anthracite, bituminous coal A, bituminous coal B) at temperatures of 35 °C, 45 °C and 55 °C. The influence of temperature, pressure, and coal rank on the Gibbs excess and absolute/real adsorption amounts of supercritical CO2/CH4 on the coal samples has been analyzed. Several traditional isotherm models are applied to interpret the experimental data, and Langmuir-related models are verified to provide good performance. However, these models are limited to isothermal conditions and are highly dependent on extensive experiments. To overcome these deficiencies, an innovative adsorption model is proposed based on machine learning methods. This model is applied to the adsorption data of both this paper and four earlier publications, and it proved to be highly effective in predicting the adsorption behavior of a given type of coal. To further remove the restriction to a single coal type, a second, optimized model is provided based on published data. Using the second model, one can predict the adsorption behavior of coal from its fundamental physicochemical parameters. Overall, by working directly with real data, the machine learning technique makes a unified adsorption model possible, avoiding the tedious theoretical assumptions, derivations, and strong limitations of traditional models.
Keywords: Supercritical CO2 | Supercritical CH4 | Coal | Adsorption model | Machine learning
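Of the traditional isotherms mentioned above, the Langmuir model is the one reported to fit well. Its functional form is simple enough to sketch directly; the parameter values in the test below are made up for illustration, not taken from the paper's coal data:

```python
def langmuir(p, q_max, b):
    """Langmuir adsorption isotherm: adsorbed amount q at pressure p.

    q_max -- monolayer (maximum) adsorption capacity
    b     -- Langmuir affinity constant (1/pressure units)
    """
    return q_max * b * p / (1.0 + b * p)
```

Two characteristic properties make it easy to sanity-check a fit: q is zero at zero pressure, reaches half of q_max at p = 1/b, and approaches q_max asymptotically at high pressure.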
Unsupervised classification of multi-omics data during cardiac remodeling using deep learning (2019)
Integration of multi-omics in cardiovascular diseases (CVDs) presents high potentials for translational discoveries. By analyzing abundance levels of heterogeneous molecules over time, we may uncover biological interactions and networks that were previously unidentifiable. However, to effectively perform integrative analysis of temporal multi-omics, computational methods must account for the heterogeneity and complexity in the data. To this end, we performed unsupervised classification of proteins and metabolites in mice during cardiac remodeling using two innovative deep learning (DL) approaches. First, a long short-term memory (LSTM)-based variational autoencoder (LSTM-VAE) was trained on time-series numeric data. The low-dimensional embeddings extracted from the LSTM-VAE were then used for clustering. Second, deep convolutional embedded clustering (DCEC) was applied on images of temporal trends. Instead of a two-step procedure, DCEC performs joint optimization for image reconstruction and cluster assignment. Additionally, we performed K-means clustering, partitioning around medoids (PAM), and hierarchical clustering. Pathway enrichment analysis using the Reactome knowledgebase demonstrated that DL methods yielded higher numbers of significant biological pathways than conventional clustering algorithms. In particular, DCEC resulted in the highest number of enriched pathways, suggesting the strength of its unified framework based on visual similarities. Overall, unsupervised DL is shown to be a promising analytical approach for integrative analysis of temporal multi-omics.
Keywords: Cardiovascular | Clustering | Multi-omics | Time-series | Unsupervised deep learning | Integrative analysis
An efficient simulation optimization methodology to solve a multi-objective problem in unreliable unbalanced production lines (2019)
This research develops an expert system to address a novel problem in the literature of buffer allocation and production lines. We investigate real-world unreliable unbalanced production lines where all time-based parameters are probabilistic, including time between part arrivals, processing times, time between failures, repair times, and setup times. The main contributions of the paper are twofold. First and foremost, the mean processing times of workstations and the buffer capacities, unlike in the existing literature, are considered as decision variables in a multi-objective optimization problem which maximizes the throughput rate and minimizes the total buffer capacity as well as the total cost of the mean process time reductions. Secondly, an efficient methodology is developed that can precisely reflect a real-world system without any unrealistic and/or restrictive assumptions on the probabilistic nature of the system, which are commonly made in the existing literature. One of the greatest challenges in this research is to estimate the throughput rate function, since it highly depends on the random behavior of the system. Thus, a simulation optimization approach is developed based on Design of Experiments and Response Surface Methodology to fit a regression model for the throughput rate. Finally, the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Non-dominated Ranked Genetic Algorithm (NRGA) are used to generate high-quality solutions for the aforementioned problem. This methodology is run on a real numerical case, and the experimental results confirm its advantages. The methodology is an innovative expert system with a knowledge base developed through this simulation optimization approach, and it can be applied to complex production line problems on a large or small scale with different types of decision variables and objective functions.
The application of this expert system is transformative to other manufacturing systems.
Keywords: Unreliable unbalanced production lines | Buffer allocation problem | Simulation optimization | Design of experiments | Response surface methodology | Meta-heuristics
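The Design of Experiments step above requires enumerating combinations of factor levels (buffer sizes, mean processing times, etc.) at which to run the simulation before a response surface can be fitted. A minimal full-factorial design generator; the factor names and levels are illustrative, not from the paper:

```python
from itertools import product

def full_factorial(levels):
    """Full-factorial experimental design.

    levels -- dict mapping factor name -> list of levels to test
    Returns one dict (factor -> level) per experimental run.
    """
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]
```

Each returned run would be fed to the simulation model, and the observed throughput rates then regressed against the factors to obtain the response surface.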
Globally-biased BIRECT algorithm with local accelerators for expensive global optimization (2019)
In this paper, a black-box global optimization problem with expensive function evaluations is considered. This problem is challenging for numerical methods due to the practical limits on the computational budget often required by intelligent systems. For its efficient solution, a new DIRECT-type hybrid technique is proposed. The new algorithm incorporates a novel sampling on diagonals and a bisection strategy (instead of the trisection commonly used in existing DIRECT-type algorithms), embedded into the globally-biased framework and enriched with three different local minimization strategies. The numerical results on a test set of almost 900 problems from the literature and on a real-life application regarding nonlinear regression show that the new approach effectively addresses well-known DIRECT weaknesses, has beneficial effects on the overall performance, and, on average, gives significantly better results compared to several DIRECT-type methods widely used in decision-making expert systems.
Keywords: Nonlinear global optimization | DIRECT-type algorithms | BIRECT algorithm | Hybrid optimization algorithms | Nonlinear regression
Development of accurate human head models for personalized electromagnetic dosimetry using deep learning (2019)
The development of personalized human head models from medical images has become an important topic in the electromagnetic dosimetry field, including the optimization of electrostimulation, safety assessments, etc. Human head models are commonly generated via the segmentation of magnetic resonance images into different anatomical tissues. This process is time-consuming and requires special expertise for segmenting a relatively large number of tissues. Thus, it is challenging to accurately compute the electric field in different specific brain regions. Recently, deep learning has been applied for the segmentation of the human brain. However, most studies have focused on the segmentation of brain tissue only, and little attention has been paid to other tissues, which are considerably important for electromagnetic dosimetry. In this study, we propose a new architecture for a convolutional neural network, named ForkNet, to perform the segmentation of whole human head structures, which is essential for evaluating the electric field distribution in the brain. The proposed network can be used to generate personalized head models and applied for the evaluation of the electric field in the brain during transcranial magnetic stimulation. Our computational results indicate that the head models generated using the proposed network exhibit strong matching with those created via manual segmentation in an intra-scanner segmentation task.
Keywords: Convolutional neural network | Deep learning | Image segmentation | Transcranial magnetic stimulation
Design and implementation of the fuzzy expert system in Monte Carlo methods for fuzzy linear regression (2019)
In this study, a fuzzy expert system (FES) within the Monte Carlo (MC) method, which is used for estimating fuzzy linear regression model (FLRM) parameters, is applied to determine the parameter intervals for the first time in the literature. The MC method for estimating FLRM parameters is a new field of study that is very useful and time-saving. However, a major problem might occur in determining the parameter intervals from which the regression model parameters are supposed to come. If the intervals are calculated too wide, the FLRM error will be very large; conversely, the actual model parameters will not be obtained if the intervals are calculated too narrow. This drawback has not been addressed in the literature before, and only optimization methods have been applied to achieve the best interval values. In this article, the FES is used for the first time to solve this problem in the parameter estimation process for the FLRM in the field of statistics. For this purpose, the difference between the support sets of the fuzzy observation value and the fuzzy estimation value (W) is taken into account. The most appropriate intervals for the parameters are those that make W as small as possible. Thus, the FES is designed to determine the best intervals for the model parameters. The system knowledge base is composed of 7 fuzzy rules. As a result, it is deduced that the FLRM parameter estimates obtained from the MC method using the FES are very close to the real values. The real impact of this paper lies in showing the applicability of FESs to solving problems encountered in the field of statistics with the help of linguistic expressions. Moreover, these outcomes will be useful for enriching the studies that have already focused on FLRMs and will encourage researchers to use FESs to solve problems in statistics.
To sum up, this study demonstrates that FESs, which are used in technological devices and make our lives easier, can also be used to solve problems that we confront in the field of statistics efficiently, using linguistic expressions as a human inference system would.
Keywords: Fuzzy expert system | Fuzzy linear regression | Monte Carlo
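The MC step described above repeatedly draws candidate regression parameters from given intervals and keeps the draw that best fits the data; the FES's job is to tune those intervals. A crisp (non-fuzzy) sketch of the sampling idea, with made-up data and interval bounds rather than anything from the paper:

```python
import random

def mc_linear_fit(xs, ys, a_interval, b_interval, n_iter=20000, seed=0):
    """Monte Carlo search for y ~ a + b*x: sample (a, b) uniformly from the
    given intervals and return the pair with the smallest squared error.
    (The paper works with fuzzy parameters; this sketch is crisp.)"""
    rng = random.Random(seed)
    best_err, best_a, best_b = float("inf"), None, None
    for _ in range(n_iter):
        a = rng.uniform(*a_interval)
        b = rng.uniform(*b_interval)
        err = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
        if err < best_err:
            best_err, best_a, best_b = err, a, b
    return best_a, best_b
```

The trade-off the FES resolves is visible here: widening the intervals slows convergence to the true parameters, while narrowing them too far can exclude the true parameters entirely.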