A grid-quadtree model selection method for support vector machines (2020)
In this paper, a new model selection approach for the Support Vector Machine (SVM), which integrates the quadtree technique with the grid search and is denominated grid-quadtree (GQ), is proposed. The developed method is the first in the literature to apply the quadtree to SVM parameter optimization. The SVM is a machine-learning technique for pattern recognition whose performance relies on its parameter determination. Thus, the model selection problem for the SVM is an important field of study and requires expert and intelligent systems to solve it. Real classification data sets involve a huge number of instances and features, and the greater the training data set dimension, the larger the cost of a recognition system. The grid search (GS) is the most popular and the simplest method to select parameters for the SVM. However, it is time-consuming, which limits its application to big-sized problems. With this in mind, the main idea of this research is to apply the quadtree technique to the GS to make it faster. Hence, this may lower the computational time cost of solving problems such as bio-identification, bank credit risk and cancer detection. Based on the asymptotic behaviors of the SVM, it was observed that the quadtree is able to avoid evaluating the full GS search space. As a consequence, the GQ analyzes fewer parameter combinations, solving the same problem with much more efficiency. To assess the GQ performance, ten classification benchmark data sets were used. The obtained results were compared with those of the traditional GS. The outcomes showed that the GQ is able to find parameters that are as good as the GS ones while executing 78.8124% to 85.8415% fewer operations. This research points out that the adoption of the quadtree markedly reduces the computational time of the original GS, making it much more efficient for dealing with high-dimensional and large data sets.
Keywords: Support vector machine | Parameter determination | Quadtree | Grid search
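The coarse-to-fine idea behind the grid-quadtree can be illustrated with a toy sketch (not the authors' implementation): recursively evaluate a small grid over the current parameter box, then zoom into a half-size box around the best point, spending far fewer evaluations than an exhaustive fine grid. The objective surface, box sizes, and depth below are invented for illustration.

```python
def quadtree_like_search(score, box, depth, evals):
    """Evaluate a 3x3 grid over `box`, then recurse into a half-size
    box centred on the best point. `evals` counts score() calls."""
    (x0, x1), (y0, y1) = box
    best_v, best_xy = float("-inf"), None
    for i in range(3):
        for j in range(3):
            x = x0 + (x1 - x0) * i / 2
            y = y0 + (y1 - y0) * j / 2
            evals[0] += 1
            v = score(x, y)
            if v > best_v:
                best_v, best_xy = v, (x, y)
    if depth == 0:
        return best_xy, best_v
    bx, by = best_xy
    hw, hh = (x1 - x0) / 4, (y1 - y0) / 4
    sub = ((max(x0, bx - hw), min(x1, bx + hw)),
           (max(y0, by - hh), min(y1, by + hh)))
    return quadtree_like_search(score, sub, depth - 1, evals)

# Toy "validation accuracy" surface with a known optimum at (1.3, 2.7),
# standing in for an SVM's cross-validation score over (C, gamma).
score = lambda x, y: -((x - 1.3) ** 2 + (y - 2.7) ** 2)

evals = [0]
(bx, by), _ = quadtree_like_search(score, ((0, 4), (0, 4)), depth=5, evals=evals)
# A full 33x33 grid would cost 1089 evaluations; the recursion uses 54.
```

The saving comes from never evaluating regions far from the incumbent best point, mirroring the abstract's claim that the quadtree avoids the full GS search space.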
A systematic survey of computer-aided diagnosis in medicine: Past and present developments (2019)
Computer-aided diagnosis (CAD) in medicine is the result of a large amount of effort expended in the interface of medicine and computer science. As some CAD systems in medicine try to emulate the diagnostic decision-making process of medical experts, they can be considered as expert systems in medicine. Furthermore, CAD systems in medicine may process clinical data that can be complex and/or massive in size. They do so in order to infer new knowledge from data and use that knowledge to improve their diagnostic performance over time. Therefore, such systems can also be viewed as intelligent systems because they use a feedback mechanism to improve their performance over time. The main aim of the literature survey described in this paper is to provide a comprehensive overview of past and current CAD developments. This survey/review can be of significant value to researchers and professionals in medicine and computer science. There are already some reviews about specific aspects of CAD in medicine. However, this paper focuses on the entire spectrum of the capabilities of CAD systems in medicine. It also identifies the key developments that have led to today’s state-of-the-art in this area. It presents an extensive and systematic literature review of CAD in medicine, based on 251 carefully selected publications. While medicine and computer science have advanced dramatically in recent years, each area has also become profoundly more complex. This paper advocates that in order to further develop and improve CAD, it is required to have well-coordinated work among researchers and professionals in these two constituent fields. Finally, this survey helps to highlight areas where there are opportunities to make significant new contributions. This may profoundly impact future research in medicine and in select areas of computer science.
Keywords: Computer-aided diagnosis | Computer-aided detection | Expert and intelligent systems | Computerized signal analysis | Segmentation | Classification
An Expert System Gap Analysis and Empirical Triangulation of Individual Differences, Interventions, and Information Technology Applications in Alertness of Railroad Workers (2019)
In this abstract we would like to provide some concrete information on the article’s main impact and significance for expert and intelligent systems. The main impact is that the PTC expert intelligent system fills in the gaps between the human and software decision-making processes. This gap is analyzed via empirical triangulation of rail worker data collected from groups, individuals, and the rail industry itself. We utilize an expert intelligent system PTC information technology application both to measure and to improve the alertness of groups and workers, in order to improve the overall safety of the railways through reduced human errors and failures to prevent accidents. Many individual differences in alertness among military, railroad, and other industry workers stem from a lack of sufficient sleep. This continues to be a concern in the railroad industry, even with the implementation of positive train control (PTC) expert system technology. Information technology aids such as PTC cannot prevent all accidents, and errors and failures with PTC may occur. Furthermore, drug interventions are a short-term solution for improving alertness. This study investigated the effect of sleep deprivation on the alertness of railroad signalmen at work, individual differences in alertness, and the information technology available to improve alertness. We investigated various information and communication technology control systems that can be used to maintain operational safety in the railroad industry in the face of incompatible circadian rhythms due to irregular hours, weekend work, and night operations. To fully explain individual differences after the adoption of technology, our approach posits the necessary parameters that one must consider for reason-oriented action, sequential updating, feedback, and technology acceptance in a unified model.
This triangulation can help manage workers by efficiently increasing their productivity and improving their health. In our analysis we used R statistical software and Tableau. To test our theory, we issued an Apple watch to a locomotive engineer. The perceived usefulness, perceived ease of use, and actual use he reported led to an analysis of his sleep patterns that eventually ended in his adoption of a sleep apnea device and an improvement in his alertness and effectiveness. His adoption of the technology also resulted in a decrease in his use of chemical interventions to increase his alertness. Our model shows that the alertness of signalmen can be predicted. Therefore, we recommend that the alertness of all railroad workers be predicted given the safety limitations of PTC.
Keywords: Sleep Deprivation | Fatigue | Stress | Expert System | Alertness | Empirical Analysis
A review of machine learning algorithms for identification and classification of non-functional requirements (2019)
Context: Recent developments in requirements engineering (RE) methods have seen a surge in using machine-learning (ML) algorithms to solve some difficult RE problems. One such problem is the identification and classification of non-functional requirements (NFRs) in requirements documents. ML-based approaches to this problem have been shown to produce promising results, better than those produced by traditional natural language processing (NLP) approaches. Yet, a systematic understanding of these ML approaches is still lacking. Method: This article reports on a systematic review of 24 ML-based approaches for identifying and classifying NFRs. Directed by three research questions, this article aims to understand what ML algorithms are used in these approaches, how these algorithms work and how they are evaluated. Results: (1) 16 different ML algorithms are found in these approaches, of which supervised learning algorithms are the most popular. (2) All 24 approaches have followed a standard process in identifying and classifying NFRs. (3) Precision and recall are the most used metrics to measure the performance of these approaches. Finding: The review finds that while ML-based approaches have potential in the classification and identification of NFRs, they face some open challenges that will affect their performance and practical application. Impact: The review calls for close collaboration between RE and ML researchers to address the open challenges facing the development of real-world ML systems. Significance: The use of ML in RE opens up exciting opportunities to develop novel expert and intelligent systems to support RE tasks and processes. This implies that RE is being transformed into an application of modern expert systems.
Keywords: Requirements engineering | Non-functional requirements | Requirements documents | Requirements identification | Requirements classification | Machine learning
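Precision and recall, the metrics the review reports as most commonly used, are computed per NFR category from true-positive, false-positive, and false-negative counts. The category names and labels below are invented examples, not taken from any surveyed study.

```python
def precision_recall(y_true, y_pred, label):
    """Per-class precision and recall for one NFR category."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical gold vs. predicted NFR categories for six requirements.
gold = ["security", "usability", "security", "performance", "security", "usability"]
pred = ["security", "security", "security", "performance", "usability", "usability"]
p, r = precision_recall(gold, pred, "security")  # p = 2/3, r = 2/3
```

Precision penalizes requirements wrongly flagged as the category; recall penalizes category members that were missed, which is why papers typically report both.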
Mining Twitter data for causal links between tweets and real-world outcomes (2019)
The authors present an expert and intelligent system that (1) identifies influential term groups having causal relationships with real-world enterprise outcomes from Twitter data and (2) quantifies the appropriate time lags between identified influential term groups and enterprise outcomes. Existing expert and intelligent systems, which are defined as computer systems that imitate the ability of human decision making, could enable computers to identify the spread of Twitter users’ enterprise-related feedback automatically. However, existing expert and intelligent systems have limitations on automatically identifying the causal effects on enterprise outcomes. Identifying the causal effects on enterprise outcomes is important, because Twitter users’ feedback toward enterprise decisions may have real-world implications. The proposed expert and intelligent system can support decision makers’ decisions considering the real-world effects of identified Twitter users’ feedback on enterprise outcomes. In particular, (1) a co-occurrence network analysis model is exploited to discover term candidates for generating influential term groups that are combinations of enterprise-related terms, which potentially influence enterprise outcomes. (2) Time series models and (3) a Granger causality analysis model are then employed to identify influential term groups having causal relationships with enterprise outcomes with the appropriate time lags. Case studies involving a real-world internet video streaming and disc rental provider as well as an airline company are used to test the validity of the proposed expert and intelligent system for both predicting enterprise outcomes in a long period and predicting the effects of specific events on enterprise outcomes in a short period.
Keywords: Expert and intelligent system | Social media | Enterprise outcome | Co-occurrence network | Time series analysis | Granger causality analysis
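A minimal sketch of the lag-quantification step (a simplification, not the paper's full Granger causality analysis): slide the term-frequency series against the outcome series and pick the lag with the highest Pearson correlation. The series below are synthetic.

```python
def pearson(a, b):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def best_lag(term_freq, outcome, max_lag):
    """Lag (in periods) at which term frequency best correlates
    with the later enterprise outcome."""
    scores = {lag: pearson(term_freq[:-lag], outcome[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=scores.get)

# Synthetic data: the outcome follows the term frequency three periods later.
freq = [i % 5 for i in range(30)]
outcome = [0, 0, 0] + freq[:-3]
lag = best_lag(freq, outcome, max_lag=6)  # lag == 3
```

A Granger test goes further than correlation (it asks whether past tweet terms improve prediction of the outcome beyond the outcome's own history), but the lag search has this same sliding-window shape.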
Specifics of medical data mining for diagnosis aid: A survey (2019)
Data mining continues to play an important role in medicine; specifically, for the development of diagnosis aid models used in expert and intelligent systems. Although we can find abundant research on this topic, clinicians remain reluctant to use decision support tools. Social pressure partly explains this lukewarm position, but concerns about reliability and credibility are also put forward. To address this reticence, we emphasize the importance of the collaboration between data miners and clinicians. This survey lays the foundation for such an interaction by focusing on the specifics of diagnosis aid and the related data modeling goals. In this regard, we provide an overview of the requirements expected by the clinicians, who are both the experts and the final users. Indeed, we believe that the interaction with clinicians should take place from the very first steps of the process and throughout the development of the predictive models, not only at the final validation stage. In contrast to a current research approach driven quite blindly by data, we advocate the need for a new expert-aware approach. This survey paper provides guidelines to contribute to the design of daily helpful diagnosis aid systems.
Keywords: Data mining | Medicine | Diagnosis aid | Explainable artificial intelligence
Data mining methodology employing artificial intelligence and a probabilistic approach for energy-efficient structural health monitoring with noisy and delayed signals (2019)
Numerous methods have been developed in the context of expert and intelligent systems for structural health monitoring (SHM) with wireless sensor networks (WSNs). However, these techniques have been proven to be efficient when dealing with continuous signals, and the applicability of such expert systems with discrete noisy signals has not yet been explored. This study presents an intelligent data mining methodology as part of an expert system developed for SHM with noisy and delayed signals, which are generated by a through-substrate self-powered sensor network. The noted sensor network has been demonstrated as an effective means for minimizing energy consumption in WSNs for SHM. Experimental vibration tests were conducted on a cantilever plate to evaluate the developed expert system for SHM. The proposed data mining method is based on the integration of pattern recognition, an innovative probabilistic approach, and machine learning. The novelty of the proposed system for SHM with data interpretation methodology lies in the integration of the noted intelligent techniques on discrete, binary, noisy, and delayed patterns of signals collected from self-powered sensing technology in the application to a practical engineering problem, i.e., data-driven energy-efficient SHM. Results confirm that the proposed data mining method employing a probabilistic approach can be effectively used to reconstruct delayed and missing signals, thereby addressing the important issue of energy availability for intelligent SHM systems being used for damage identification in civil and aerospace structures. The applicability and effectiveness of the expert system with the data mining approach in detecting damage with noisy signals was demonstrated for plate-like structures with an accuracy of 97%.
The present study successfully contributes to advancing data mining and signal processing techniques in the SHM domain, indicating a practical application of expert and intelligent systems to damage detection in SHM platforms. Findings from this research pave the way for the development of data analysis techniques that can be employed for interpreting noisy and incomplete signals collected from various expert systems, such as those being used in intelligent infrastructure monitoring systems and smart cities.
Keywords: Structural health monitoring | Data mining | Artificial intelligence | Probabilistic approach | Signal time delay
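As a deliberately simplified stand-in for the paper's probabilistic signal reconstruction (the actual method integrates pattern recognition and machine learning), missing entries in a binary event stream can be filled with the sensor's empirically most probable value. The sensor stream below is invented.

```python
def impute_binary(readings):
    """Fill None gaps in a binary event stream with the sensor's
    empirically most likely value (a maximum-likelihood guess from
    the observed entries)."""
    observed = [r for r in readings if r is not None]
    p_event = sum(observed) / len(observed)  # estimated P(event = 1)
    fill = 1 if p_event >= 0.5 else 0
    return [fill if r is None else r for r in readings]

# Hypothetical delayed/noisy stream from one self-powered sensor node,
# where None marks readings lost to energy-harvesting dropouts.
stream = [1, 1, None, 1, 0, None, 1, 1]
restored = impute_binary(stream)  # [1, 1, 1, 1, 0, 1, 1, 1]
```

The paper's approach conditions on richer context than a single marginal probability, but the core move, replacing a missing discrete reading with its most probable value, is the same.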
Hybrid fast unsupervised feature selection for high-dimensional data (2019)
The emergence of the “curse of dimensionality” issue as a result of high-dimensional datasets deteriorates the capability of learning algorithms and also requires high memory and computational costs. Selection of features, by discarding redundant and irrelevant features, functions as a crucial machine learning technique aimed at reducing the dimensionality of these datasets, which improves the performance of the learning algorithm. Feature selection has been extensively applied in many application areas relevant to expert and intelligent systems, such as data mining and machine learning. Although many algorithms have been developed so far, they are still unsatisfactory when confronting high-dimensional data. This paper presents a new hybrid filter-based feature selection algorithm based on a combination of clustering and the modified Binary Ant System (BAS), called FSCBAS, to overcome the search space and high-dimensional data processing challenges efficiently. This model provides both global and local search capabilities between and within clusters. In the proposed method, inspired by the genetic algorithm and simulated annealing, a damped mutation strategy is introduced that avoids falling into local optima, and a new redundancy reduction policy adopted to estimate the correlation between the selected features further improves the algorithm. The proposed method can be applied in many expert system applications, such as microarray data processing, text classification and image processing in high-dimensional data, to handle the high dimensionality of the feature space and improve classification performance simultaneously. The performance of the proposed algorithm was compared to that of state-of-the-art feature selection algorithms using different classifiers on real-world datasets. The experimental results confirmed that the proposed method reduces computational complexity significantly and achieves better performance than the other feature selection methods.
Keywords: Feature selection | High-dimensional data | Binary ant system | Clustering | Mutation
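The redundancy-reduction idea — keep a candidate feature only if it is not too correlated with features already selected — can be sketched as a plain filter. This is only the correlation filter on invented data; FSCBAS itself combines clustering with a modified binary ant system, which is not reproduced here.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length feature columns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def filter_select(features, threshold=0.9):
    """Rank features by variance (a simple relevance proxy), then greedily
    drop any feature whose absolute correlation with an already-selected
    feature exceeds `threshold` (redundancy reduction)."""
    def variance(col):
        m = sum(col) / len(col)
        return sum((x - m) ** 2 for x in col) / len(col)
    order = sorted(features, key=lambda name: -variance(features[name]))
    selected = []
    for name in order:
        if all(abs(pearson(features[name], features[s])) <= threshold
               for s in selected):
            selected.append(name)
    return selected

# Toy dataset: f2 is a rescaled copy of f1, so one of them is redundant.
feats = {"f1": [1, 2, 3, 4, 5],
         "f2": [2, 4, 6, 8, 10],
         "f3": [5, 1, 4, 2, 3]}
kept = filter_select(feats)  # ['f2', 'f3']
```

Here f2 survives rather than f1 only because the variance ranking places it first; the perfectly correlated duplicate is then discarded, which is exactly the redundancy the abstract's policy targets.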
Enhancing batch normalized convolutional networks using displaced rectifier linear units: A systematic comparative study (2019)
A substantial number of expert and intelligent systems rely on deep learning methods to solve problems in areas such as economics, physics, and medicine. Improving the accuracy of the activation functions used by such methods can directly and positively impact the overall performance and quality of the mentioned systems at no cost whatsoever. In this sense, enhancing the design of such theoretical fundamental blocks is of great significance, as it immediately impacts a broad range of current and future real-world deep learning based applications. Therefore, in this paper, we turn our attention to the interworking between activation functions and batch normalization, which is currently a practically mandatory technique for training deep networks. We propose the activation function Displaced Rectifier Linear Unit (DReLU) by conjecturing that extending the identity function of ReLU into the third quadrant enhances compatibility with batch normalization. Moreover, we used statistical tests to compare the impact of using distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy of standardized VGG and Residual Network state-of-the-art models. These Convolutional Neural Networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets. The results showed that DReLU sped up learning in all models and datasets. Besides, statistically significant performance assessments (p < 0.05) showed that DReLU enhanced the test accuracy presented by ReLU in all scenarios. Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance. Therefore, this work demonstrates that it is possible to increase performance by replacing ReLU with an enhanced activation function.
Keywords: DReLU | Activation function | Batch normalization | Comparative study | Convolutional Neural Networks | Deep learning
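One plausible reading of DReLU's description — ReLU's identity function extended slightly into the third quadrant — is a ReLU displaced diagonally by a small offset δ. Both the exact functional form and the δ value below are assumptions for illustration, not taken from the paper.

```python
def relu(x):
    """Standard rectifier: zero for all negative inputs."""
    return max(0.0, x)

def drelu(x, delta=0.05):
    """Assumed DReLU form: identity for x > -delta, constant -delta below,
    so the identity segment reaches into the third quadrant and small
    negative activations (common after batch normalization) survive."""
    return max(-delta, x)

# Positive inputs pass through both functions unchanged; negatives are
# clipped at -delta by DReLU instead of at 0 by ReLU.
# drelu(1.5) -> 1.5 ; drelu(-0.01) -> -0.01 ; drelu(-2.0) -> -0.05
```

The conjectured benefit is that a batch-normalized pre-activation distribution centred near zero loses less information when the cutoff sits below zero rather than exactly at it.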
Group fuzzy comprehensive evaluation method under ignorance (2019)
This paper aims to solve a group fuzzy comprehensive evaluation (FCE) problem in which global or local ignorance may exist in the judgments made by experts and the importance degrees of experts differ. The basic probability assignment (BPA) function is used to extract each expert’s judgment information, and the super fuzzy relationship matrices, consisting of the individual type and the general type, are constructed by Shafer’s discounting and Dempster’s rule. Then each type of super fuzzy relationship matrix is combined with the factor weight set via a specified fuzzy operator, and the comprehensive evaluation result, a belief distribution on the power set of grade levels, is obtained. A multi-objective programming model is established to compute the optimal belief distribution on each grade level, and an algorithm is summarized to derive the final grade level to which the evaluated alternative belongs. Moreover, numerical comparisons between the proposed method and relevant existing methods are given to clarify the advantages of the proposed method. Finally, an illustrative example is provided to demonstrate the applicability of the proposed method and algorithm. It is worth noting that the proposed method can be easily converted into a core algorithm, which is beneficial for developing fuzzy expert systems from the perspective of ignorance, and thus it has an important impact and significance for expert and intelligent systems.
Keywords: Fuzzy comprehensive evaluation | Group decision making | Shafer’s discounting | Dempster’s rule | Super fuzzy relationship matrix | Multi-objective programming
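The two evidence-theory operations the method builds on can be sketched directly; the frame of grade levels and the mass values below are invented. Shafer's discounting scales an expert's masses by a reliability factor α and moves the remainder onto the whole frame (total ignorance), and Dempster's rule combines two basic probability assignments by intersecting focal elements and renormalising away conflict:

```python
def discount(bpa, frame, alpha):
    """Shafer's discounting: scale masses by reliability alpha and move
    the remaining 1 - alpha onto the whole frame (total ignorance)."""
    out = {focal: alpha * m for focal, m in bpa.items()}
    out[frame] = out.get(frame, 0.0) + (1.0 - alpha)
    return out

def dempster(m1, m2):
    """Dempster's rule of combination: intersect focal elements and
    renormalise by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    return {f: m / (1.0 - conflict) for f, m in combined.items()}

# Two experts judging grade levels {good, poor}; masses are invented.
frame = frozenset({"good", "poor"})
m1 = {frozenset({"good"}): 0.6, frame: 0.4}
m2 = {frozenset({"poor"}): 0.5, frame: 0.5}
fused = dempster(discount(m1, frame, alpha=1.0), m2)
# fused: {good} -> 0.3/0.7, {poor} -> 0.2/0.7, {good, poor} -> 0.2/0.7
```

Mass left on the whole frame is exactly the "ignorance" the paper's belief distributions carry; the super fuzzy relationship matrices are built from many such discounted-and-combined BPAs, one per expert and factor.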