A knowledge-based expert system to assess power plant project cost overrun risks
A knowledge-based expert system to assess power plant project cost overrun risks (2019)
Preventing cost overruns of infrastructure projects such as power plants is a global project management problem. Existing risk assessment methods and models have limited ability to address the complicated nature of these projects and to incorporate the probabilistic causal relationships among risks and probabilistic data for risk assessment, while accounting for domain experts' judgments and the subjectivity and uncertainty involved in those judgments during decision making. A knowledge-based expert system is presented to address this issue, using a fuzzy canonical model (FCM) that integrates the fuzzy group decision-making approach (FGDMA) and the canonical model (i.e., a modified Bayesian belief network model). The FCM (a) overcomes the subjectivity and uncertainty involved in domain experts' judgments, (b) significantly reduces the time and effort the domain experts need to elicit conditional probabilities of the risks involved in complex risk networks, and (c) reduces the model development tasks, which also reduces the computational load on the model. This approach advances the application of fuzzy-Bayesian models to cost overrun risk assessment in complex and uncertain project environments by addressing the major constraints associated with such models. A case study demonstrates and tests the application of the model for cost overrun risk assessment in the construction and commissioning phase of a power plant project, confirming its ability to pinpoint the most critical risks involved: in this case, the complexity of lifting and rigging heavy equipment, an inadequate work inspection and testing plan, inadequate site/soil investigation, unavailability of resources in the local market, and the contractor's poor planning and scheduling.
Keywords: Cost overruns | Risk assessment | Power plant projects | Fuzzy logic | Canonical model
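The canonical node at the heart of such a modified Bayesian belief network is what cuts elicitation effort: in a noisy-OR style canonical model, each parent risk needs only one link probability instead of a full conditional probability table. A minimal Python sketch, with made-up risk probabilities rather than the paper's parameters:

```python
# Minimal sketch of a noisy-OR canonical node, the building block that lets
# experts give one link probability per parent risk instead of a full CPT.
# The link probabilities and active-risk pattern below are hypothetical.
def noisy_or(link_probs, active):
    """P(effect occurs) given which parent risks are active.

    link_probs[i]: probability that parent risk i alone triggers the effect.
    active[i]: 1 if parent risk i is present, else 0.
    """
    p_none = 1.0
    for p, x in zip(link_probs, active):
        if x:
            p_none *= (1.0 - p)  # each active parent independently fails to trigger
    return 1.0 - p_none

# e.g. two of three risk drivers active: 1 - 0.3 * 0.5 = 0.85
p_overrun = noisy_or([0.7, 0.5, 0.2], [1, 1, 0])
print(p_overrun)  # 0.85
```

With n parents, the expert supplies n numbers instead of 2^n table entries, which is the elicitation saving the abstract refers to.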
Data-based structure selection for unified discrete grey prediction model
Data-based structure selection for unified discrete grey prediction model (2019)
Grey models have been reported to be promising for time series prediction with small samples, but the diversity of model structures and modelling assumptions restrains their further application and development. In this paper, a novel grey prediction model, named the discrete grey polynomial model, is proposed to unify a family of univariate discrete grey models. The proposed model can represent the most popular homogeneous and non-homogeneous discrete grey models and, furthermore, can induce other novel models, thereby highlighting the relationship between the models and their structures and assumptions. Based on the proposed model, a data-based algorithm is put forward to select the model structure adaptively; it reduces the requirement for the modeller's knowledge from an expert system perspective. Two numerical experiments with large-scale simulations are conducted and the results show its effectiveness. Finally, two real case tests show that the proposed model benefits from its adaptive structure and produces reliable multi-step-ahead predictions.
Keywords: Grey system theory | Discrete grey model | Structure selection | Matrix decomposition
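To make the unified family concrete, the plain discrete grey model DGM(1,1) (the simplest member, which the polynomial model generalises) can be fitted with a single least-squares solve on the accumulated series. A minimal sketch with an illustrative toy series, not one of the paper's cases:

```python
# Minimal DGM(1,1) sketch: fit x1(k+1) = b1*x1(k) + b2 on the accumulated
# series, roll the recursion forward, then difference back. The input
# series here is a toy example chosen to be exactly exponential.
import numpy as np

def dgm11_fit_predict(x0, steps):
    """Fit a discrete grey model on series x0 and forecast `steps` ahead."""
    x1 = np.cumsum(x0)                      # accumulated generating operation
    A = np.column_stack([x1[:-1], np.ones(len(x1) - 1)])
    b1, b2 = np.linalg.lstsq(A, x1[1:], rcond=None)[0]
    x1_hat = [x1[0]]
    for _ in range(len(x0) - 1 + steps):    # roll the recursion forward
        x1_hat.append(b1 * x1_hat[-1] + b2)
    x0_hat = np.diff(x1_hat, prepend=0.0)   # restore the original scale
    return x0_hat[len(x0):]                 # the multi-step-ahead part

# an exactly exponential series is recovered perfectly by the recursion
print(dgm11_fit_predict([1, 2, 4, 8], steps=2))  # ~[16. 32.]
```

The paper's polynomial model replaces the constant term `b2` with a polynomial in k, which is how one structure covers homogeneous and non-homogeneous variants.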
In this paper, a novel problem in transshipment networks is proposed. The main aims are to introduce the problem and to provide useful tools for solving it both exactly and approximately. In a transshipment network it is important to decide which are the best paths between each pair of nodes. Representing the network by a graph, the union of these paths is a delivery subgraph of the original graph which contains all the nodes and some of the edges. Nodes in this subgraph which are adjacent to more than two nodes are called switches because, when sending flow between any pair of nodes, the switches on the path must adequately direct it. Switches are facilities that direct flows among users. Installing a switch involves installing adequate equipment and thus an allocation cost; furthermore, traversing a switch also implies a service cost. The Switch Location Problem is defined as the problem of determining the delivery subgraph with the lowest total cost. Two of the three solution approaches that we propose, the exact one and the math-heuristic one, are decomposition algorithms based on articulation vertices. These two approaches could be embedded in expert systems for locating switches in transshipment networks. The results should help a decision maker select the adequate approach depending on the shape and size of the network and on the external time limit. Our results show that the exact approach is a valuable tool if the network has fewer than 1000 nodes. Two upsides of our heuristics are that they do not require special networks and give good solutions, gap-wise. The impact of this paper is twofold: it highlights the difficulty of adequately locating switches and it emphasizes the benefit of decomposition algorithms.
Keywords: Discrete location | Math-heuristic | Articulation vertex | Block-Cutpoint graph
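The articulation vertices that drive the decomposition can be found with the standard single-DFS (Tarjan/Hopcroft) lowpoint test. A self-contained sketch on a made-up graph, not an instance from the paper:

```python
# Articulation-vertex detection via one depth-first search with lowpoint
# values: a non-root vertex u is a cut vertex if some DFS child subtree has
# no back edge above u; the root is a cut vertex if it has >1 DFS child.
def articulation_points(adj):
    """adj: dict node -> list of neighbours (undirected graph)."""
    disc, low, cuts = {}, {}, set()
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        children = 0
        for v in adj[u]:
            if v == parent:
                continue
            if v in disc:
                low[u] = min(low[u], disc[v])   # back edge
            else:
                children += 1
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if parent is not None and low[v] >= disc[u]:
                    cuts.add(u)                 # subtree cannot bypass u
        if parent is None and children > 1:
            cuts.add(u)

    for u in adj:
        if u not in disc:
            dfs(u, None)
    return cuts

# two triangles sharing vertex 2: removing 2 disconnects the graph
g = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
print(articulation_points(g))  # {2}
```

Each articulation vertex splits the instance into blocks that can be solved independently and recombined, which is the idea behind the exact and math-heuristic decomposition approaches.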
TAPSTROKE: A novel intelligent authentication system using tap frequencies
TAPSTROKE: A novel intelligent authentication system using tap frequencies (2019)
Emerging security requirements lead to new validation protocols being implemented in recent authentication systems, employing biometric traits instead of regular passwords. When additional security is required in the authentication phase, keystroke recognition and classification systems and the related interfaces are very promising for collecting and classifying biometric traits. These systems generally operate in the time domain; however, conventional time-domain solutions can be inadequate if a touchscreen is too small to enter any kind of alphanumeric password, or if a password consists of a single gesture such as a tap on the screen. Therefore, we propose a novel frequency-based authentication system, TAPSTROKE, as a prospective protocol for small touchscreens and an alternative authentication methodology for existing devices. We first analyzed the binary train signals formed by tap passwords, consisting of taps instead of alphanumeric digits, using the regular short time Fourier transformation (STFT) and a modified short time Fourier transformation (mSTFT). The unique biometric feature extracted from a tap signal is the frequency-time localization achieved by the spectrograms generated by these transformations. Touch signals generated from the same tap password create significantly different spectrograms for predetermined window sizes. Finally, we conducted several experiments to distinguish future attempts using one-class support vector machines (SVM) with a simple linear kernel for Hamming and Blackman window functions. The experimental results are greatly encouraging: we achieved 1.40%–2.12% and 2.01%–3.21% equal error rates (EER) with mSTFT, while with the regular STFT the classifiers produced considerably higher EERs, 7.49%–11.95% and 6.93%–10.12%, with the Hamming and Blackman window functions, respectively. The whole methodology, as an expert system protecting users from fraud attacks, sheds light on a new era of authentication systems for future smart gears and watches.
Keywords: Tapstroke | Keystroke | Authentication | Biometrics | Frequency | Short time Fourier transformation | Support vector machines
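The feature pipeline described above (tap password, binary pulse train, windowed STFT, spectrogram) can be sketched with plain NumPy. The sampling rate, duration, tap instants, and window sizes below are illustrative assumptions, and this is the regular STFT, not the paper's mSTFT:

```python
# Sketch of the frequency-time feature: a tap password becomes a binary
# pulse train, and a short-time Fourier transform over Hamming windows
# yields the spectrogram used as the biometric template.
import numpy as np

def tap_signal(tap_times, fs=100, duration=2.0):
    """Binary train: 1 at each tap instant, 0 elsewhere."""
    sig = np.zeros(int(fs * duration))
    for t in tap_times:
        sig[int(t * fs)] = 1.0
    return sig

def stft_spectrogram(sig, win=32, hop=16):
    """Magnitude STFT with a Hamming window; rows are frames, columns bins."""
    w = np.hamming(win)
    frames = [sig[i:i + win] * w for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

# a hypothetical 4-tap password over 2 seconds at 100 Hz
spec = stft_spectrogram(tap_signal([0.2, 0.5, 0.9, 1.4]))
print(spec.shape)  # (frames, win // 2 + 1)
```

The one-class SVM in the paper would then be trained on spectrograms like `spec` from a user's enrollment attempts.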
On initial population generation in feature subset selection
On initial population generation in feature subset selection (2019)
The performance of evolutionary algorithms depends on many factors, such as population size, number of generations, and crossover or mutation probability. Generating the initial population is one of the important steps in evolutionary algorithms: a poor initial population may unnecessarily increase the number of searches or cause the algorithm to converge at local optima. In this study, we aim to find a promising method for generating the initial population in the Feature Subset Selection (FSS) domain. FSS is not an expert system by itself, yet it constitutes a significant step in many expert systems, since it eliminates redundancy in data, which decreases training time and improves solution quality. To achieve our goal, we compare a total of five initial population generation methods: Information Gain Ranking (IGR), a greedy approach, and three types of random approaches. We evaluate these methods using a specialized Teaching Learning Based Optimization search algorithm (MTLBO-MD) and three supervised learning classifiers: Logistic Regression, Support Vector Machines, and Extreme Learning Machine. In our experiments, we employ 12 publicly available datasets, mostly obtained from the well-known UCI Machine Learning Repository. According to their feature sizes and instance counts, we manually classify these datasets as small, medium, or large-sized. Experimental results indicate that all tested methods achieve similar solutions on small-sized datasets. For medium-sized and large-sized datasets, however, the IGR method provides a better starting point in terms of execution time and learning performance. Finally, when compared with other studies in the literature, the IGR method proves to be a viable option for initial population generation.
Keywords: Feature subset selection | Initial population | Multiobjective optimization
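The IGR step scores each feature by the entropy reduction it yields on the class label; top-ranked features then seed the initial population. A minimal sketch for discrete features on a toy dataset (the paper's datasets and exact seeding scheme are not reproduced here):

```python
# Information Gain Ranking sketch: IG(X) = H(Y) - sum_v P(X=v) * H(Y | X=v)
# for a discrete feature X and class label Y. Toy data, not the UCI sets.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy reduction on the labels from knowing one discrete feature."""
    total = entropy(labels)
    n = len(labels)
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        total -= len(subset) / n * entropy(subset)
    return total

# feature 0 perfectly predicts the label, feature 1 is pure noise
X = [[0, 1], [0, 0], [1, 1], [1, 0]]
y = [0, 0, 1, 1]
gains = [information_gain([row[j] for row in X], y) for j in range(2)]
print(gains)  # [1.0, 0.0]
```

Ranking by `gains` and activating the highest-scoring features in the initial chromosomes gives the evolutionary search the informed starting point the abstract credits for faster convergence.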
Combining hierarchical clustering approaches using the PCA method
Combining hierarchical clustering approaches using the PCA method (2019)
In expert systems, data mining methods are algorithms that simulate humans' problem-solving capabilities. Clustering methods, as unsupervised machine learning methods, are crucial approaches to categorizing similar samples into the same categories. Applying different clustering algorithms to a given dataset produces clusters of different qualities; hence, many researchers have applied clustering combination methods to reduce the risk of choosing an inappropriate clustering algorithm. In these methods, the outputs of several clustering algorithms are combined: the input hierarchical clusterings are transformed into descriptor matrices, and their combination is achieved by aggregating these descriptor matrices. In previous works, only element-wise aggregation operators have been used, and the relation between the elements of each descriptor matrix has been ignored. However, the value of each element of the descriptor matrix is meaningful in comparison with its other elements. The current study proposes a novel method of combining hierarchical clustering approaches based on principal component analysis (PCA). PCA as an aggregator allows considering all elements of the descriptor matrices. In the proposed approach, basic clusterings are built and transformed into descriptor matrices. Then, a final matrix is extracted from the descriptor matrices using PCA. Next, a final dendrogram is constructed from that matrix to summarize the results of the diverse clusterings. Experimental results on popular available datasets show that the clustering accuracy of the proposed method is superior to basic clustering methods, such as single, average, and centroid linkage, and to previously combined hierarchical clustering methods. In addition, statistical tests show that the proposed method significantly outperformed hierarchical clustering combination methods with element-wise averaging operators on almost all tested datasets. Several experiments also confirm the robustness of the proposed method with respect to its parameter settings.
Keywords: Clustering | Hierarchical clustering | Principal component analysis | PCA
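One way to read "PCA as an aggregator" is to treat each entry position of the descriptor matrices as an observation and each input clustering as a variable, so the first principal component yields data-driven weights instead of an element-wise average. The sketch below follows that reading with two hypothetical descriptor (cophenetic-style distance) matrices; the paper's exact descriptor construction may differ:

```python
# PCA aggregation sketch: vectorise each descriptor matrix as one column,
# take the first principal axis of the stacked data, and use its (sign-
# corrected, normalised) loadings as weights over the input clusterings.
import numpy as np

def pca_consensus(descriptors):
    """Combine descriptor matrices via first-principal-component weights."""
    n = descriptors[0].shape[0]
    X = np.column_stack([d.ravel() for d in descriptors])   # (n*n, k)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    w = vt[0]
    if w.sum() < 0:                 # fix the arbitrary SVD sign
        w = -w
    scores = X.dot(w / w.sum())     # weighted combination of the clusterings
    return scores.reshape(n, n)

# two hypothetical 3x3 descriptor matrices that disagree on one pair
D1 = np.array([[0., 1., 4.], [1., 0., 4.], [4., 4., 0.]])
D2 = np.array([[0., 2., 4.], [2., 0., 4.], [4., 4., 0.]])
C = pca_consensus([D1, D2])
```

The final dendrogram in the paper would then be built from `C` by a standard hierarchical linkage.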
Double Q-PID algorithm for mobile robot control
Double Q-PID algorithm for mobile robot control (2019)
Many expert systems have been developed for self-adaptive PID controllers of mobile robots. However, the high computational requirements of the expert system layers developed for tuning PID controllers still demand prior expert knowledge and highly efficient algorithmic and software execution for real-time applications. To address these problems, in this paper we propose an expert agent-based system, built on a reinforcement learning agent, for self-adapting multiple low-level PID controllers in mobile robots. For the formulation of the artificial expert agent, we develop an incremental, model-free version of the double Q-learning algorithm for fast on-line adaptation of multiple low-level PID controllers. Fast learning and high on-line adaptability of the artificial expert agent are achieved by means of a proposed incremental active-learning exploration-exploitation procedure for non-uniform state space exploration, along with an experience replay mechanism for multiple value function updates in the double Q-learning algorithm. A comprehensive comparative simulation study and experiments on a real mobile robot demonstrate the high performance of the proposed algorithm for real-time simultaneous tuning of multiple adaptive low-level PID controllers of mobile robots in real-world conditions.
Keywords: Reinforcement learning | Double Q -learning | Incremental learning | Double Q-PID | Mobile robots | Multi-platforms
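The core rule the Double Q-PID agent builds on is the tabular double Q-learning update, which maintains two value tables and evaluates each table's greedy action with the other, removing the maximisation bias of plain Q-learning. A minimal sketch with abstract state/action indices; the PID-specific state design and the incremental extensions are the paper's:

```python
# Tabular double Q-learning update: on each step, randomly pick one table,
# choose the greedy action with it, but evaluate that action with the other.
import random
import numpy as np

def double_q_update(Qa, Qb, s, a, r, s2, alpha=0.1, gamma=0.9):
    """One double Q-learning step for transition (s, a, r, s2)."""
    if random.random() < 0.5:
        a_star = int(np.argmax(Qa[s2]))
        Qa[s, a] += alpha * (r + gamma * Qb[s2, a_star] - Qa[s, a])
    else:
        b_star = int(np.argmax(Qb[s2]))
        Qb[s, a] += alpha * (r + gamma * Qa[s2, b_star] - Qb[s, a])

# toy self-loop task: reward 1 only for action 1 in state 0
random.seed(0)
Qa, Qb = np.zeros((3, 2)), np.zeros((3, 2))
for _ in range(200):
    a = random.randrange(2)
    double_q_update(Qa, Qb, 0, a, float(a == 1), 0)
print(Qa[0, 1] > Qa[0, 0])  # True: both tables learn to prefer action 1
```

In the Double Q-PID setting, the actions would be increments to the PID gains and the reward a function of tracking error; the update rule itself is unchanged.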
Neural trees with peer-to-peer and server-to-client knowledge transferring models for high-dimensional data classification
Neural trees with peer-to-peer and server-to-client knowledge transferring models for high-dimensional data classification (2019)
Classification of high-dimensional data by a new expert system is pursued in the current paper. The proposed system defines several non-disjoint clusters of highly relevant features with the least inner redundancy. For each cluster, a neural tree is implemented, exploiting an Extreme Learning Machine (ELM) together with an inference engine in every node. The classification rules derived from the ELM are stored in the rule base of the inference engine to recognize the classes, and majority voting is used to unify the results of the different neural trees. This structure is referred to as the Forest of Extreme Learning Machines with Rule-base Transferring (FELM-RT). The contribution of FELM-RT is to decrease duplicated computations by using two novel interaction models between the neural trees. In the first interaction model, the Peer-to-Peer (P2P) model, each node can share its rule base with the other nodes of the various neural trees. In the second, referred to as the Server-to-Client (S2C) model, the neural tree that works on the cluster with the best relevancy and redundancy shares its rules with the other neural trees. In both models, a fuzzy aggregation technique is used to adjust the certainty of the rules. The processing time of FELM-RT decreases substantially, and the classification accuracy improves. High F-measure and G-mean results show that FELM-RT classifies high-dimensional datasets without over-fitting. A comparison between FELM-RT and some state-of-the-art classifiers reveals that FELM-RT outperforms them, especially on datasets with more than 3 million features.
Keywords: Neural tree | Rule-base transferring | Feature clustering | Extreme learning machine | Communication models
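The ELM at each neural-tree node is what keeps training cheap: input weights are random and fixed, and only the output weights are solved in closed form. A minimal sketch with illustrative layer sizes and a toy problem, not the paper's architecture:

```python
# Extreme Learning Machine sketch: random fixed hidden layer, closed-form
# (pseudo-inverse) least-squares solve for the output weights.
import numpy as np

class ELM:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_in, n_hidden))  # fixed random input weights
        self.b = rng.normal(size=n_hidden)
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X.dot(self.W) + self.b)

    def fit(self, X, y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H).dot(y)        # single linear solve
        return self

    def predict(self, X):
        return self._hidden(X).dot(self.beta)

# XOR-style toy problem, not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
pred = ELM(2, 20).fit(X, y).predict(X)
print(np.abs(pred - y).max() < 1e-3)  # True: the training set is fit exactly
```

Because training is one pseudo-inverse per node, sharing already-derived rules between trees (P2P or S2C) avoids re-solving the same clusters, which is the duplicated computation the abstract refers to.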
Setting up standards: A methodological proposal for pediatric Triage machine learning model construction based on clinical outcomes
Setting up standards: A methodological proposal for pediatric Triage machine learning model construction based on clinical outcomes (2019)
Triage is a critical process in hospital emergency departments (ED). Specifically, we consider how to achieve fast and accurate patient Triage in the ED of a pediatric hospital. The goal of this paper is to establish methodological best practices for the application of machine learning (ML) to Triage in the pediatric ED, providing a comprehensive comparison of the performance of ML techniques over a large dataset. Our work is among the first attempts in this direction. Following very recent works in the literature, we use the clinical outcome of a case as its label for supervised ML model training, instead of the more uncertain labels provided by experts. The experimental dataset contains the records of 3 years of operation of the hospital ED, comprising 189,718 patient visits. The clinical outcome of 9271 cases (4.98%) was hospital admission; our dataset is therefore highly class-imbalanced. Our reported performance comparison focuses on four ML models: Deep Learning (DL), Random Forest (RF), Naive Bayes (NB), and Support Vector Machines (SVM). Data preprocessing includes class imbalance correction and case re-labeling. We use different well-known metrics to evaluate the performance of the ML models in three experimental settings: (a) classification of each case into the standard five Triage urgency levels, (b) discrimination of high versus low case severity according to its clinical outcome, and (c) comparison of the number of patients assigned to each standard Triage urgency level against the rule-based Triage expert system currently in use at the hospital. RF achieved greater AUC, accuracy, PPV, and specificity than the other models in the dichotomous classification experiments. On the implementation side, our study shows that ML predictive models trained on clinical outcomes provide better Triage performance than the current rule-based expert system in operation at the hospital.
Keywords: Machine learning | Emergency department | Triage | Data science | Clinical decision support systems
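The abstract does not say which class imbalance correction was applied; one common choice for a 4.98% positive rate is random undersampling of the majority class before training. A minimal sketch of that step, offered as an assumption rather than the paper's method:

```python
# Random-undersampling sketch for class imbalance correction: keep every
# minority-class sample and an equal-sized random subset of each other class.
import numpy as np

def undersample(X, y, seed=0):
    """Return a class-balanced subset of (X, y)."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    return X[keep], y[keep]

# a toy 9:1 imbalance (echoing the ~5% admission rate) reduced to 1:1
X = np.arange(20).reshape(10, 2)
Xb, yb = undersample(X, np.array([0] * 9 + [1]))
print(np.bincount(yb))  # [1 1]
```

Balancing before training matters here because with a 4.98% positive class a model can reach ~95% accuracy by always predicting "no admission", which is why the paper also reports AUC, PPV, and specificity.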
Analytical games for knowledge engineering of expert systems in support to Situational Awareness: The Reliability Game case study
Analytical games for knowledge engineering of expert systems in support to Situational Awareness: The Reliability Game case study (2019)
Knowledge Acquisition (KA) methods are of paramount importance in the design of intelligent systems, and research is ongoing to improve their effectiveness and efficiency. Analytical games appear to be a promising tool to support KA. In this paper we describe how analytical games can be used for knowledge engineering of Bayesian networks, through the presentation of the case study of the Reliability Game. This game was developed with the aim of collecting data on the impact of meta-knowledge about sources of information upon human Situational Assessment in a maritime context. We describe the computational model obtained from the dataset and how the card positions, which reflect a player's beliefs, can easily be converted into subjective probabilities and used to learn latent constructs, such as source reliability, by applying the Expectation-Maximisation algorithm.
Keywords: Source reliability | Expert knowledge | Knowledge acquisition | Bayesian networks | Parameter learning | Analytical game
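Learning a latent source reliability with EM can be sketched with a simplified Dawid-Skene-style model: each source reports binary events, the true event states are latent, and reliability is a source's per-report accuracy. The report matrix and priors below are made-up illustrations; the game's actual Bayesian network is richer:

```python
# EM sketch for latent source reliability: alternate between inferring the
# posterior truth of each event (E-step) and re-estimating each source's
# expected agreement with that inferred truth (M-step).
import numpy as np

def em_reliability(reports, n_iter=50):
    """reports: (n_events, n_sources) array of 0/1 reports.
    Returns (posterior P(event is true), per-source reliability)."""
    rel = np.full(reports.shape[1], 0.7)        # initial reliability guess
    prior = 0.5
    for _ in range(n_iter):
        # E-step: posterior that each event is true given all reports
        like1 = np.prod(np.where(reports == 1, rel, 1 - rel), axis=1)
        like0 = np.prod(np.where(reports == 0, rel, 1 - rel), axis=1)
        post = prior * like1 / (prior * like1 + (1 - prior) * like0)
        # M-step: reliability = expected agreement with the inferred truth
        agree = post[:, None] * reports + (1 - post[:, None]) * (1 - reports)
        rel = agree.mean(axis=0)
        prior = post.mean()
    return post, rel

# sources 0-1 agree consistently; source 2 often contradicts them
R = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1], [1, 1, 1], [0, 0, 0]])
post, rel = em_reliability(R)
print(rel[2] < rel[0])  # True: the dissenting source gets lower reliability
```

The card positions from the game play the role of the subjective report probabilities here; EM then recovers the latent reliability without ever observing ground truth directly.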