Download and view articles related to Neural networks :: Page 1
Search results - Neural networks

Number of articles found: 206
Row | Title | Type
1 Towards a real-time processing framework based on improved distributed recurrent neural network variants with fastText for social big data analytics
Year of publication: 2020
Big data generated by social media represents a valuable source of information, which offers an excellent opportunity to mine valuable insights. In particular, user-generated content such as reviews, recommendations, and users’ behavior data are useful for supporting several marketing activities of many companies. Knowing what users are saying about the products they bought or the services they used through reviews in social media represents a key factor for making decisions. Sentiment analysis is one of the fundamental tasks in Natural Language Processing. Although deep learning for sentiment analysis has achieved great success and allowed several firms to analyze and extract relevant information from their textual data, as the volume of data grows, a model that runs in a traditional environment cannot be effective, which implies the importance of efficient distributed deep learning models for social Big Data analytics. Besides, it is known that social media analysis is a complex process, which involves a set of complex tasks. Therefore, it is important to address the challenges and issues of social big data analytics and enhance the performance of deep learning techniques in terms of classification accuracy to obtain better decisions. In this paper, we propose an approach for sentiment analysis, which is devoted to adopting fastText with Recurrent neural network variants to represent textual data efficiently. Then, it employs the new representations to perform the classification task. Its main objective is to enhance the performance of well-known Recurrent Neural Network (RNN) variants in terms of classification accuracy and handle large-scale data. In addition, we propose a distributed intelligent system for real-time social big data analytics. It is designed to ingest, store, process, index, and visualize the huge amount of information in real time. The proposed system adopts distributed machine learning with our proposed method for enhancing decision-making processes. Extensive experiments conducted on two benchmark data sets demonstrate that our proposal for sentiment analysis outperforms well-known distributed recurrent neural network variants (i.e., Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Gated Recurrent Unit (GRU)). Specifically, we tested the efficiency of our approach using the three different deep learning models. The results show that our proposed approach is able to enhance the performance of the three models. The current work can provide several benefits for researchers and practitioners who want to collect, handle, analyze, and visualize several sources of information in real time. Also, it can contribute to a better understanding of public opinion and user behaviors using our proposed system with the improved variants of the most powerful distributed deep learning and machine learning algorithms. Furthermore, it is able to increase the classification accuracy of several existing works based on RNN models for sentiment analysis.
Keywords: Big data | FastText | Recurrent neural networks | LSTM | BiLSTM | GRU | Natural language processing | Sentiment analysis | Social big data analytics
English article
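As a rough illustration of the pipeline this abstract describes, the sketch below wires pretrained fastText word vectors into the three RNN variants mentioned (LSTM, BiLSTM, GRU) for a binary sentiment classifier. The vocabulary size, the `fasttext_matrix` placeholder, the layer widths, and the binary setup are assumptions made here for illustration, not the authors' architecture or their distributed deployment.

```python
# Illustrative sketch: fastText embeddings feeding LSTM / BiLSTM / GRU sentiment
# classifiers. `fasttext_matrix` is a random placeholder standing in for a real
# pretrained fastText embedding matrix; all sizes are arbitrary.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

vocab_size, embed_dim = 20000, 300
fasttext_matrix = np.random.rand(vocab_size, embed_dim).astype("float32")  # placeholder

def build_model(rnn_layer):
    """Embedding (frozen fastText vectors) -> RNN variant -> sigmoid output."""
    return models.Sequential([
        layers.Embedding(
            vocab_size, embed_dim,
            embeddings_initializer=tf.keras.initializers.Constant(fasttext_matrix),
            trainable=False),
        rnn_layer,
        layers.Dense(1, activation="sigmoid"),
    ])

# The abstract compares three RNN variants; each plugs into the same skeleton.
variants = {
    "LSTM": build_model(layers.LSTM(64)),
    "BiLSTM": build_model(layers.Bidirectional(layers.LSTM(64))),
    "GRU": build_model(layers.GRU(64)),
}
for model in variants.values():
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```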
2 A neural network based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing
Year of publication: 2020 - English PDF: 16 pages - Persian translation (doc): 45 pages
Workflow scheduling is a research topic that has been studied extensively in cloud computing; it maps workflow tasks onto cloud resources while taking the specified QoS objectives into account. In this paper, we model the dynamic workflow scheduling problem as a dynamic multi-objective optimization problem (DMOP) in which the sources of dynamism are resource failures and the number of objectives, both of which may change over time. Software faults or hardware failures may give rise to the first type of dynamism, while real-life scenarios in cloud computing may change the number of objectives during workflow execution. In this study, we propose NN-DNSGA-II, a prediction-based dynamic multi-objective evolutionary algorithm that combines an artificial neural network with the NSGA-II algorithm. In addition, five non-prediction-based dynamic algorithms from the literature are adapted for the dynamic workflow scheduling problem. Scheduling solutions are found with respect to six objectives: minimizing makespan, cost, energy, and degree of imbalance, and maximizing reliability and utility. Empirical studies based on real-world applications from the Pegasus workflow management system show that our NN-DNSGA-II algorithm significantly outperforms the alternative algorithms in most cases with respect to the metrics considered for DMOPs with unknown true Pareto-optimal fronts, including the number of non-dominated solutions, Schott's spacing, and the Hypervolume indicator.
Translated article
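The translated abstract describes combining an artificial neural network with NSGA-II so that, after a change such as a resource failure, the population can be re-seeded from a prediction rather than restarted from scratch. The sketch below illustrates only that prediction step, with a small scikit-learn regressor on synthetic data; it is an interpretation of the general prediction-based DMOEA idea, not the paper's algorithm, and every size and name is a placeholder.

```python
# Illustrative sketch of a prediction-based reinitialization step: an ANN learns
# how past Pareto-optimal solutions moved between environments and predicts a
# seed population for the next environment. Data and sizes are placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_solutions, n_vars = 50, 10

# Decision vectors of the (approximate) Pareto sets in two consecutive environments.
pareto_prev = rng.random((n_solutions, n_vars))                                   # environment t-1
pareto_curr = pareto_prev + 0.05 * rng.standard_normal((n_solutions, n_vars))     # environment t

# Train an ANN on the observed movement of solutions between environments.
predictor = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
predictor.fit(pareto_prev, pareto_curr)

# When the next change (e.g., a resource failure) is detected, seed part of the
# evolutionary population with predicted solutions instead of restarting randomly.
predicted_seed = predictor.predict(pareto_curr)
initial_population = np.vstack([predicted_seed, rng.random((n_solutions, n_vars))])
```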
3 Predicting and explaining corruption across countries: A machine learning approach
Year of publication: 2020
In the era of Big Data, Analytics, and Data Science, corruption is still ubiquitous and is perceived as one of the major challenges of modern societies. A large body of academic studies has attempted to identify and explain the potential causes and consequences of corruption, at varying levels of granularity, mostly through theoretical lenses by using correlations and regression-based statistical analyses. The present study approaches the phenomenon from the predictive analytics perspective by employing contemporary machine learning techniques to discover the most important corruption perception predictors based on enriched/enhanced nonlinear models with a high level of predictive accuracy. Specifically, within the multiclass classification modeling setting that is employed herein, the Random Forest (an ensemble-type machine learning algorithm) is found to be the most accurate prediction/classification model, followed by Support Vector Machines and Artificial Neural Networks. From the practical standpoint, the enhanced predictive power of machine learning algorithms coupled with a multi-source database revealed the most relevant corruption-related information, contributing to the related body of knowledge and generating actionable insights for administrators, scholars, citizens, and politicians. The variable importance results indicated that government integrity, property rights, judicial effectiveness, and education index are the most influential factors in defining the level of corruption.
Keywords: Corruption perception | Machine learning | Predictive modeling | Random forest | Society policies and regulations | Government integrity | Social development
English article
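To make the modeling setup in this entry concrete, the sketch below shows a multiclass Random Forest classifier with variable importances, in the spirit of the abstract. The synthetic data, the number of classes, and the feature names (borrowed from the factors the abstract highlights) are illustrative only.

```python
# Illustrative sketch: multiclass Random Forest predicting a corruption-perception
# class and reporting variable importances. Data and feature names are placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
features = ["government_integrity", "property_rights", "judicial_effectiveness",
            "education_index", "gdp_per_capita"]          # assumed predictor names
X = pd.DataFrame(rng.random((300, len(features))), columns=features)
y = rng.integers(0, 3, size=300)                          # 3 assumed corruption classes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
for name, imp in sorted(zip(features, model.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```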
4 Exploiting the stimuli encoding scheme of evolving Spiking Neural Networks for stream learning
Year of publication: 2020
Stream data processing has lately gained momentum with the arrival of new Big Data scenarios and applications dealing with continuously produced information flows. Unfortunately, traditional machine learning algorithms are not prepared to tackle the specific challenges imposed by data stream processing, such as the need for learning incrementally, limited memory and processing time requirements, and adaptation to non-stationary data, among others. To face these paradigms, Spiking Neural Networks have emerged as one of the most promising stream learning techniques, with variants such as Evolving Spiking Neural Networks capable of efficiently addressing many of these challenges. Interestingly, these networks resort to a particular population encoding scheme – Gaussian Receptive Fields – to transform the incoming stimuli into temporal spikes. The study presented in this manuscript sheds light on the predictive potential of this encoding scheme, focusing on how it can be applied as a computationally lightweight, model-agnostic preprocessing step for data stream learning. We provide informed intuition to unveil under which circumstances the aforementioned population encoding method yields effective prediction gains in data stream classification with respect to the case where no preprocessing is performed. Results obtained for a variety of stream learning models and both synthetic and real stream datasets are discussed to empirically buttress the capability of Gaussian Receptive Fields to boost the predictive performance of stream learning methods, spanning further research towards extrapolating our findings to other machine learning problems.
Keywords: Stream learning | Gaussian receptive fields | Population encoding | Evolving Spiking Neural Networks
English article
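The population encoding scheme this abstract studies can be summarized compactly. The sketch below implements Gaussian Receptive Field encoding in its commonly cited form (evenly spaced, overlapping Gaussian fields over the input range), which can be applied as a lightweight, model-agnostic preprocessing step for a stream learner. The number of fields and the beta width factor are assumed values.

```python
# Illustrative sketch of Gaussian Receptive Field population encoding: each scalar
# input is projected onto several overlapping Gaussian fields; the activations can
# be used directly as features or mapped to spike times in an eSNN.
import numpy as np

def grf_encode(x, x_min, x_max, n_fields=10, beta=1.5):
    """Encode a scalar (or 1-D array) into n_fields Gaussian receptive field activations."""
    j = np.arange(1, n_fields + 1)
    centers = x_min + (2 * j - 3) / 2.0 * (x_max - x_min) / (n_fields - 2)
    width = (x_max - x_min) / (beta * (n_fields - 2))
    x = np.atleast_1d(np.asarray(x, dtype=float))[:, None]
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))   # shape: (len(x), n_fields)

# Example: encode one feature column before passing it to a stream learner.
activations = grf_encode([0.2, 0.7], x_min=0.0, x_max=1.0)
print(activations.shape)   # (2, 10)
```

In the evolving SNN setting these activations would be further converted to spike times; for ordinary stream learners, as studied in the abstract, they can simply replace or augment the raw feature.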
5 Neural network aided development of a semi-empirical interatomic potential for titanium
Year of publication: 2020
Artificial neural networks, utilizing machine learning techniques to uncover subtle and complex patterns in big data problems, are able to condense large amounts of computationally expensive density functional theory and ab initio results into classical force field potentials. However, in order to produce a computationally efficient network, with minimal network architecture, a structural fingerprint whose components are highly correlated to the per-atom energy is necessary. In this paper, we demonstrate the effectiveness of a structural fingerprint motivated by the highly successful MEAM formalism by creating an artificial neural network containing a single hidden layer of 20 nodes, which provides a semi-empirical force field potential for elemental titanium. This potential is suitable for dynamic calculations of α-, β-, and ω-titanium at a variety of temperatures. This potential is able to achieve a number of results in agreement with DFT calculations which surpass classical potential formalisms with comparable computational performance.
Keywords: Machine learning | Neural networks | Titanium
English article
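A minimal sketch of the network shape this abstract describes: a per-atom structural fingerprint passed through a single hidden layer of 20 nodes to give a per-atom energy, with the configuration energy obtained by summing over atoms and fitted against a reference energy. The fingerprint length, activation function, and toy data are assumptions; the MEAM-motivated fingerprint itself is not reproduced here.

```python
# Illustrative sketch: per-atom fingerprint -> one hidden layer of 20 nodes ->
# per-atom energy; total energy is the sum over atoms. Data are placeholders.
import torch
import torch.nn as nn

fingerprint_dim = 30                       # assumed descriptor length

per_atom_net = nn.Sequential(
    nn.Linear(fingerprint_dim, 20),        # single hidden layer of 20 nodes
    nn.Tanh(),
    nn.Linear(20, 1),                      # per-atom energy
)

def config_energy(fingerprints):
    """fingerprints: (n_atoms, fingerprint_dim) -> scalar total energy."""
    return per_atom_net(fingerprints).sum()

# Toy training step against a reference (e.g., DFT) energy.
fingerprints = torch.rand(64, fingerprint_dim)     # 64 atoms, placeholder data
e_ref = torch.tensor(-512.0)                       # placeholder reference energy
opt = torch.optim.Adam(per_atom_net.parameters(), lr=1e-3)
opt.zero_grad()
loss = (config_energy(fingerprints) - e_ref) ** 2
loss.backward()
opt.step()
```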
6 Intelligent-ReaxFF: Evaluating the reactive force field parameters with machine learning
Year of publication: 2020
Machine learning has been widely used in quantum chemistry, for example in data mining of quantum mechanics calculations and in representations of potential energy surfaces by neural networks. In this study, we report our efforts on the optimization of the ReaxFF parameters with machine learning frameworks. Although deep neural network potentials like High-Dimensional Neural Network Potentials (HDNNP) have achieved much success in applications such as materials modeling, factors like memory usage, training time, and accuracy are still problems when the training data set is big. On the other hand, classical potentials like ReaxFF and REBO do not have these problems, and a combination of the two is an ideal solution. Machine learning has generated techniques such as automatic differentiation and backpropagation, with which we can optimize deep neural networks or complex interatomic potentials like ReaxFF. With the TensorFlow coding platform, we have constructed an Intelligent ReaxFF (I-ReaxFF) model in terms of matrix (or tensor) operations that can optimize ReaxFF parameters automatically with gradient-based optimizers like the adaptive moment solver (Adam) and backpropagation. As inherited from TensorFlow, one significant feature of our code is GPU acceleration: the training speed can be five times faster with GPU acceleration than with pure CPU calculation. Another feature is that it can directly use ab initio molecular dynamics trajectories with surrounding periodic images as training data, therefore allowing the data set to be prepared with ease.
Keywords: Neural network | Parameterization | ReaxFF | Materials modeling | Machine learning
English article
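The optimization loop this abstract describes, with force-field parameters held as trainable tensors and fitted to reference energies through automatic differentiation and Adam, can be sketched as below. A toy Morse pair potential stands in for the far more complex ReaxFF energy terms, and the reference data are placeholders; only the training pattern is the point here.

```python
# Illustrative sketch: gradient-based fitting of force-field parameters with
# TensorFlow automatic differentiation and Adam. The Morse potential is a toy
# stand-in for ReaxFF energy terms; data and initial values are placeholders.
import tensorflow as tf

# Trainable force-field parameters (illustrative initial values).
D_e   = tf.Variable(1.0)    # well depth
alpha = tf.Variable(1.5)    # width
r_0   = tf.Variable(2.0)    # equilibrium distance

def energy(r):
    """Toy Morse energy for a batch of pair distances r."""
    return D_e * (1.0 - tf.exp(-alpha * (r - r_0))) ** 2

# Placeholder training data standing in for ab initio trajectory snapshots.
r_data = tf.constant([1.8, 2.0, 2.2, 2.5, 3.0])
e_ref  = tf.constant([0.10, 0.00, 0.08, 0.30, 0.60])

opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for step in range(200):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean((energy(r_data) - e_ref) ** 2)
    grads = tape.gradient(loss, [D_e, alpha, r_0])
    opt.apply_gradients(zip(grads, [D_e, alpha, r_0]))
```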
7 Forecasting across time series databases using recurrent neural networks on groups of similar series: A clustering approach
Year of publication: 2020
With the advent of Big Data, nowadays in many applications databases containing large quantities of similar time series are available. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potentials for producing accurate forecasts untapped. Recurrent neural networks (RNNs), and in particular Long Short Term Memory (LSTM) networks, have proven recently that they are able to outperform state-of-the-art univariate time series forecasting methods in this context, when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degenerate, so that on the way towards fully automatic forecasting methods in this space, a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model that can be used with different types of RNN models on subgroups of similar time series, which are identified by time series clustering techniques. We assess our proposed methodology using LSTM networks, a widely popular RNN variant, together with various clustering algorithms, such as kMeans, DBScan, Partition Around Medoids (PAM), and Snob. Our method achieves competitive results on benchmarking datasets under competition evaluation procedures. In particular, in terms of mean sMAPE accuracy it consistently outperforms the baseline LSTM model, and outperforms all other methods on the CIF2016 forecasting competition dataset.
Keywords: Big data forecasting | RNN | LSTM | Time series clustering | Neural networks
English article
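As a compact illustration of the grouping strategy in this abstract, the sketch below clusters a synthetic time series database with kMeans (here on simple mean and standard-deviation features) and trains one LSTM across the pooled windows of each cluster. The window length, features, cluster count, and network size are arbitrary choices, and the authors' feature extraction and evaluation procedure are not reproduced.

```python
# Illustrative sketch: cluster similar series, then train one global LSTM per
# cluster instead of one per series or one for the whole heterogeneous database.
import numpy as np
from sklearn.cluster import KMeans
import tensorflow as tf
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)
series_db = [rng.random(120) + i % 3 for i in range(30)]   # placeholder time series database

# 1) Cluster series on simple summary features (mean and standard deviation).
feats = np.array([[s.mean(), s.std()] for s in series_db])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(feats)

def make_windows(series, w=12):
    """Turn one series into (input window, next value) training pairs."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = np.array([series[i + w] for i in range(len(series) - w)])
    return X[..., None], y

# 2) Train one LSTM per cluster on the pooled windows of its member series.
models_per_cluster = {}
for c in range(3):
    Xc, yc = zip(*(make_windows(s) for s, l in zip(series_db, labels) if l == c))
    X, y = np.concatenate(Xc), np.concatenate(yc)
    m = models.Sequential([layers.LSTM(32), layers.Dense(1)])
    m.compile(optimizer="adam", loss="mse")
    m.fit(X, y, epochs=2, verbose=0)
    models_per_cluster[c] = m
```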
8 Transform domain representation-driven convolutional neural networks for skin lesion segmentation
Year of publication: 2020
Automated diagnosis systems provide a huge improvement in early detection of skin cancer and consequently contribute to successful treatment. Recent research on convolutional neural networks has achieved enormous success in segmentation and object detection tasks. However, these networks require a large amount of data, which is a big challenge in the medical domain, where data are often insufficient and even a pretrained model on medical images can hardly be found. Lesion segmentation, as the initial step of skin cancer analysis, remains a challenging issue since datasets are small and include a variety of images in terms of light, color, scale, and marks, which has led researchers to use extensive augmentation and preprocessing techniques or to fine-tune the network with a model pretrained on irrelevant images. A segmentation model based on convolutional neural networks is proposed in this study for the tasks of skin lesion segmentation and dermoscopic feature segmentation. The network is trained from scratch, and despite the small size of the datasets, neither excessive data augmentation nor any preprocessing to remove artifacts or enhance the images is applied. Instead, we investigated incorporating image representations of the transform domain into the convolutional neural network; compared with a model with more convolutional layers, this resulted in a 6% higher Jaccard index and a shorter training time. The model is further improved by applying the CIELAB color space, and the performance of the final proposed architecture is evaluated on publicly available datasets from the ISBI challenges in 2016 and 2017. The proposed model has resulted in an improvement of as much as 7% for the segmentation metrics and 17% for the feature segmentation, which demonstrates the robustness of this unique hybrid framework and its potential for future applications as well as further improvement.
Keywords: Convolutional neural network | Dermoscopic features | Melanoma | Skin lesion segmentation | Transform domain
English article
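The input-representation idea in this abstract (feeding the network a transform-domain representation together with the CIELAB color space rather than relying on heavy augmentation) is sketched below as a preprocessing step that stacks the Lab channels with a 2-D DCT of each channel. The choice of DCT, the normalization, and the channel layout are assumptions for illustration, not the paper's exact transform or architecture.

```python
# Illustrative sketch: build a 6-channel input (CIELAB + per-channel 2-D DCT)
# for a segmentation CNN. The specific transform and scaling are assumptions.
import numpy as np
import cv2
from scipy.fftpack import dctn

def build_input(image_bgr):
    """image_bgr: uint8 HxWx3 -> float32 HxWx6 (L, a, b plus their 2-D DCTs)."""
    lab = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2LAB).astype(np.float32) / 255.0
    dct = np.stack([dctn(lab[..., c], norm="ortho") for c in range(3)], axis=-1)
    return np.concatenate([lab, dct], axis=-1)

# Example with a random placeholder image.
img = np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8)
x = build_input(img)
print(x.shape)   # (256, 256, 6)
```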
9 Exploiting potential of deep neural networks by layer-wise fine-grained parallelism
Year of publication: 2020
Deep neural networks (DNNs) have become more and more important for big data analysis. They usually use data parallelism or model parallelism for extreme-scale computing. However, the two approaches realize the performance improvement mainly by using coarse-grained parallelization schemes; neither can fully exploit the potential of the parallelism of many-core systems (such as GPUs) for neural network models. Here, a new fine-grained parallelism strategy (named FiLayer) is presented based on layer-wise parallelization. It has two components: inter-layer parallelism and intra-layer parallelism. The inter-layer parallelism lets several neighboring layers be processed in a pipelined manner in a network model. For intra-layer parallelism, the operations in one layer are separated into several parts and processed concurrently. To implement the above fine-grained parallelism methods, CUDA streams are used. A mathematical analysis is presented for the influence of the fragment number on the performance of the inter-layer parallelism, and an analysis of the influence of the CUDA stream number on the performance of the intra-layer parallelism is also given. The proposed approach is realized based on Caffe. Some representative datasets, including CIFAR100 and ImageNet, are used for the experiments. The evaluation results show that it can help Caffe realize remarkable speedups, which is valuable for big data analysis.
Keywords: Deep learning | Fine-grained parallelism | CUDA stream
English article
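To make the intra-layer idea in this abstract concrete, the sketch below splits one layer's computation (a single matrix multiply, split across its output neurons) onto two CUDA streams so the halves can overlap on the GPU. It uses PyTorch streams rather than the Caffe/CUDA implementation the abstract describes, requires a CUDA device, and the sizes and two-way split are illustrative.

```python
# Illustrative sketch of intra-layer parallelism with CUDA streams: one layer's
# work is divided into parts launched on separate streams. Requires a CUDA GPU.
import torch

device = torch.device("cuda")
x = torch.randn(256, 1024, device=device)
W = torch.randn(1024, 4096, device=device)

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()
half = W.shape[1] // 2

torch.cuda.synchronize()          # make sure x and W are ready before forking streams
with torch.cuda.stream(s1):
    y1 = x @ W[:, :half]          # first half of the layer's output neurons
with torch.cuda.stream(s2):
    y2 = x @ W[:, half:]          # second half, potentially overlapping with s1
torch.cuda.synchronize()          # join both streams before using the results

y = torch.cat([y1, y2], dim=1)    # same result as the unsplit layer
```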
10 Minimum margin loss for deep face recognition
Year of publication: 2020 - English PDF: 9 pages - Persian translation (doc): 24 pages
Face recognition has achieved great success, mainly owing to the rapid development of deep neural networks (DNNs) in recent years. Different loss functions can be used in a deep neural network, and they lead to different performance. Several loss functions have been proposed recently; however, they cannot solve the problem of margin bias that exists in unbalanced data sets. In this paper, we propose to solve the margin bias problem by setting a minimum margin for all pairs of classes. We propose a new loss function, called Minimum Margin Loss (MML), which aims to push apart the class-centre pairs that are too close to each other, so as to enhance the discriminative ability of the deep features. MML supervises the training process together with the Softmax Loss and the Centre Loss to keep the margins of all classes under control regardless of their class distribution. We implement MML on the Inception-ResNet-v1 platform and conduct extensive experiments on seven face recognition data sets, namely MegaFace, FaceScrub, LFW, SLLFW, YTF, IJB-B and IJB-C. The experimental results show that the proposed MML loss function advances the state of the art in face recognition and reduces the negative effect of margin bias.
Keywords: Deep learning | Convolutional neural networks (CNN) | Face recognition | Minimum margin loss (MML)
Translated article
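Based on this translated abstract, the minimum margin idea can be sketched as a penalty on every pair of class centres whose distance falls below a chosen margin, used alongside the usual softmax cross-entropy (and, in the paper, centre loss). The margin value, the loss weighting, the sizes, and the stand-in classifier below are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of a minimum-margin penalty: given learned class centres
# (as in centre loss), penalize every centre pair closer than `margin`.
import torch
import torch.nn.functional as F

def minimum_margin_loss(centers, margin=10.0):
    """centers: (n_classes, feat_dim). Penalize centre pairs closer than `margin`."""
    dists = torch.cdist(centers, centers)             # pairwise centre distances
    n = centers.shape[0]
    iu = torch.triu_indices(n, n, offset=1)           # count each pair once
    return F.relu(margin - dists[iu[0], iu[1]]).mean()

# Toy usage inside a training step; all tensors are placeholders.
n_classes, feat_dim = 100, 512
centers = torch.randn(n_classes, feat_dim, requires_grad=True)   # learned with the model
features = torch.randn(32, feat_dim)
logits = features @ centers.t()                                   # stand-in classifier
labels = torch.randint(0, n_classes, (32,))

loss = F.cross_entropy(logits, labels) + 0.1 * minimum_margin_loss(centers)
loss.backward()
```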