Download and view articles related to neural networks :: Page 1
Search results - neural networks

Number of articles found: 148
Row | Title | Type
1 Towards a real-time processing framework based on improved distributed recurrent neural network variants with fastText for social big data analytics (2020)
Big data generated by social media is a valuable source of information and offers an excellent opportunity to mine valuable insights. In particular, user-generated content such as reviews, recommendations, and user behavior data is useful for supporting the marketing activities of many companies. Knowing what users say about the products they bought or the services they used through reviews in social media is a key factor for decision making. Sentiment analysis is one of the fundamental tasks in Natural Language Processing. Although deep learning for sentiment analysis has achieved great success and allowed several firms to analyze and extract relevant information from their textual data, as the volume of data grows a model that runs in a traditional environment cannot be effective, which implies the importance of efficient distributed deep learning models for social big data analytics. Besides, social media analysis is a complex process that involves a set of complex tasks. Therefore, it is important to address the challenges and issues of social big data analytics and enhance the performance of deep learning techniques in terms of classification accuracy to obtain better decisions. In this paper, we propose an approach for sentiment analysis that adopts fastText with recurrent neural network variants to represent textual data efficiently and then employs the new representations to perform the classification task. Its main objective is to enhance the performance of well-known Recurrent Neural Network (RNN) variants in terms of classification accuracy and to handle large-scale data. In addition, we propose a distributed intelligent system for real-time social big data analytics, designed to ingest, store, process, index, and visualize huge amounts of information in real time. The proposed system adopts distributed machine learning with our proposed method to enhance decision-making processes. Extensive experiments conducted on two benchmark data sets demonstrate that our proposal for sentiment analysis outperforms well-known distributed recurrent neural network variants (i.e., Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (BiLSTM), and Gated Recurrent Unit (GRU)). Specifically, we tested the efficiency of our approach using the three different deep learning models, and the results show that our proposed approach is able to enhance the performance of all three. The current work can provide several benefits for researchers and practitioners who want to collect, handle, analyze, and visualize several sources of information in real time. It can also contribute to a better understanding of public opinion and user behavior using our proposed system with improved variants of the most powerful distributed deep learning and machine learning algorithms. Furthermore, it is able to increase the classification accuracy of several existing works based on RNN models for sentiment analysis. (An illustrative code sketch of the fastText-plus-RNN idea follows this entry.)
Keywords: Big data | FastText | Recurrent neural networks | LSTM | BiLSTM | GRU | Natural language processing | Sentiment analysis | Social big data analytics
English article
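A minimal sketch of the idea described above, assuming pre-trained fastText-style word vectors are available as a NumPy matrix and using a BiLSTM classifier in tf.keras; the vocabulary size, embedding dimension, layer sizes, and the random placeholder matrix are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

# Placeholder for real fastText vectors (rows indexed by token id); sizes are illustrative.
vocab_size, embed_dim = 20000, 300
fasttext_matrix = np.random.rand(vocab_size, embed_dim).astype("float32")

# fastText-initialised embeddings feeding an RNN variant (here a BiLSTM) for binary sentiment classification.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(
        vocab_size, embed_dim,
        embeddings_initializer=tf.keras.initializers.Constant(fasttext_matrix),
        trainable=False),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_token_ids, labels, ...) would then train on integer-encoded review texts.
```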
2 Prediction of the ground temperature with ANN, LS-SVM and fuzzy LS-SVM for GSHP application (2020)
The ground source heat pump (GSHP) system has received more and more attention for its energy-conserving and environmentally friendly properties. Acquiring the undisturbed ground temperature is a prerequisite for designing a GSHP system. Measurement by burying temperature sensors underground is the conventional means of obtaining ground temperature data. However, this approach is usually time consuming and expensive, and it can easily run into technical difficulties. The rapid development of intelligent computation algorithms provides solutions for many difficult real-world problems. Based on a large number of measured ground temperature data from two boreholes of 100 m depth located in Chongqing, ground temperature prediction models based on an artificial neural network (ANN) and a least-squares support vector machine (LS-SVM) are established, respectively. Two kinds of validation, i.e., holdout validation and k-fold validation, are then conducted on the two models. Furthermore, a new method that combines fuzzy theory with LS-SVM is proposed to address the heavy computational burden of the LS-SVM model. Compared with the above two models, the newly proposed model not only improves the calculation speed markedly but also promotes the prediction accuracy, and it is especially superior to the single LS-SVM model. (A brief sketch of the LS-SVM formulation follows this entry.)
Keywords: Ground temperature | Fuzzy | Support vector machine | Ground source heat pump
English article
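As a reference for the LS-SVM part of this abstract, here is a minimal NumPy sketch of standard least-squares SVM regression, which solves a single linear system for the bias and dual weights; the RBF kernel and the gamma/sigma defaults are generic assumptions, and the fuzzy extension and the paper's borehole data are not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Standard LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual coefficients alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Predict with f(x) = sum_i alpha_i k(x, x_i) + b."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```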
3 Exploiting the stimuli encoding scheme of evolving Spiking Neural Networks for stream learning (2020)
Stream data processing has lately gained momentum with the arrival of new Big Data scenarios and applications dealing with continuously produced information flows. Unfortunately, traditional machine learning algorithms are not prepared to tackle the specific challenges imposed by data stream processing, such as the need for learning incrementally, limited memory and processing time requirements, and adaptation to non-stationary data, among others. To face these paradigms, Spiking Neural Networks have emerged as one of the most promising stream learning techniques, with variants such as Evolving Spiking Neural Networks capable of efficiently addressing many of these challenges. Interestingly, these networks resort to a particular population encoding scheme – Gaussian Receptive Fields – to transform the incoming stimuli into temporal spikes. The study presented in this manuscript sheds light on the predictive potential of this encoding scheme, focusing on how it can be applied as a computationally lightweight, model-agnostic preprocessing step for data stream learning. We provide informed intuition to unveil under which circumstances the aforementioned population encoding method yields effective prediction gains in data stream classification with respect to the case where no preprocessing is performed. Results obtained for a variety of stream learning models and both synthetic and real stream datasets are discussed to empirically buttress the capability of Gaussian Receptive Fields to boost the predictive performance of stream learning methods, spawning further research towards extrapolating our findings to other machine learning problems. (An illustrative sketch of Gaussian Receptive Field encoding follows this entry.)
Keywords: Stream learning | Gaussian receptive fields | Population encoding | Evolving Spiking Neural Networks
English article
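A small NumPy sketch of Gaussian Receptive Field population encoding as it is commonly formulated (Bohte-style centre and width placement); the number of neurons, overlap factor beta, and maximum spike time are illustrative defaults rather than values from the study.

```python
import numpy as np

def grf_encode(x, x_min, x_max, n_neurons=10, beta=1.5, t_max=1.0):
    """Encode a scalar feature into firing times of n_neurons (> 2) overlapping
    Gaussian receptive fields covering [x_min, x_max]; stronger excitation fires earlier."""
    i = np.arange(1, n_neurons + 1)
    centres = x_min + (2 * i - 3) / 2.0 * (x_max - x_min) / (n_neurons - 2)
    width = (x_max - x_min) / (beta * (n_neurons - 2))
    excitation = np.exp(-((x - centres) ** 2) / (2.0 * width ** 2))   # values in (0, 1]
    return t_max * (1.0 - excitation)                                 # early spike = strong response

# Example: encode the value 0.37 of a feature ranging over [0, 1] into 10 spike times.
print(grf_encode(0.37, 0.0, 1.0))
```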
4 Neural network aided development of a semi-empirical interatomic potential for titanium (2020)
Artificial neural networks, utilizing machine learning techniques to uncover subtle and complex patterns in big data problems, are able to condense large amounts of computationally expensive density functional theory and ab initio results into classical force field potentials. However, in order to produce a computationally efficient network with minimal network architecture, a structural fingerprint whose components are highly correlated to the per-atom energy is necessary. In this paper, we demonstrate the effectiveness of a structural fingerprint motivated by the highly successful MEAM formalism by creating an artificial neural network containing a single hidden layer of 20 nodes, which provides a semi-empirical force field potential for elemental titanium. This potential is suitable for dynamic calculations of α-, β-, and ω-titanium at a variety of temperatures. It achieves a number of results in agreement with DFT calculations that surpass classical potential formalisms with comparable computational performance. (A minimal sketch of the described network shape follows this entry.)
Keywords: Machine learning | Neural networks | Titanium
English article
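A minimal PyTorch sketch of the network shape the abstract describes: a single hidden layer of 20 nodes mapping a per-atom structural fingerprint to a per-atom energy, summed over atoms. The fingerprint length and the activation are assumptions for illustration; the MEAM-motivated fingerprint itself is not reproduced here.

```python
import torch
import torch.nn as nn

fingerprint_dim = 30        # illustrative length of the per-atom structural fingerprint

energy_net = nn.Sequential(
    nn.Linear(fingerprint_dim, 20),   # single hidden layer of 20 nodes, as described in the abstract
    nn.Tanh(),
    nn.Linear(20, 1),                 # per-atom energy contribution
)

def configuration_energy(fingerprints: torch.Tensor) -> torch.Tensor:
    """Total energy of a configuration given per-atom fingerprints of shape (n_atoms, fingerprint_dim)."""
    return energy_net(fingerprints).sum()
```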
5 Forecasting across time series databases using recurrent neural networks on groups of similar series: A clustering approach (2020)
With the advent of Big Data, databases containing large quantities of similar time series are nowadays available in many applications. Forecasting time series in these domains with traditional univariate forecasting procedures leaves great potential for producing accurate forecasts untapped. Recurrent neural networks (RNNs), and in particular Long Short-Term Memory (LSTM) networks, have recently proven that they are able to outperform state-of-the-art univariate time series forecasting methods in this context when trained across all available time series. However, if the time series database is heterogeneous, accuracy may degenerate, so that on the way towards fully automatic forecasting methods in this space, a notion of similarity between the time series needs to be built into the methods. To this end, we present a prediction model that can be used with different types of RNN models on subgroups of similar time series, which are identified by time series clustering techniques. We assess our proposed methodology using LSTM networks, a widely popular RNN variant, together with various clustering algorithms, such as kMeans, DBScan, Partition Around Medoids (PAM), and Snob. Our method achieves competitive results on benchmarking datasets under competition evaluation procedures. In particular, in terms of mean sMAPE accuracy it consistently outperforms the baseline LSTM model, and it outperforms all other methods on the CIF2016 forecasting competition dataset. (A short sketch of the cluster-then-forecast grouping step follows this entry.)
Keywords: Big data forecasting | RNN | LSTM | Time series clustering | Neural networks
English article
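A short sketch of the grouping step: cluster similar series first, then train one forecasting model (e.g. an LSTM) per cluster instead of a single global model. The per-series features and the number of clusters used here are illustrative assumptions; the paper also evaluates DBScan, PAM, and Snob.

```python
import numpy as np
from sklearn.cluster import KMeans

def group_similar_series(series_list, k=3):
    """Cluster 1-D NumPy series by simple summary features, returning {cluster id: list of series}."""
    feats = np.array([[s.mean(), s.std(), s[-1] - s[0]] for s in series_list])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return {c: [s for s, l in zip(series_list, labels) if l == c] for c in range(k)}

# Each group would then be used to train its own RNN forecaster across the series it contains.
```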
6 Transform domain representation-driven convolutional neural networks for skin lesion segmentation (2020)
Automated diagnosis systems provide a huge improvement in the early detection of skin cancer and, consequently, contribute to successful treatment. Recent research on convolutional neural networks has achieved enormous success in segmentation and object detection tasks. However, these networks require large amounts of data, which is a big challenge in the medical domain, where data are often insufficient and even a model pretrained on medical images can hardly be found. Lesion segmentation, as the initial step of skin cancer analysis, remains a challenging issue since datasets are small and include a variety of images in terms of light, color, scale, and marks, which has led researchers to use extensive augmentation and preprocessing techniques or to fine-tune the network with a model pretrained on irrelevant images. A segmentation model based on convolutional neural networks is proposed in this study for the tasks of skin lesion segmentation and dermoscopic feature segmentation. The network is trained from scratch, and despite the small size of the datasets, neither excessive data augmentation nor any preprocessing to remove artifacts or enhance the images is applied. Instead, we investigated incorporating image representations of the transform domain into the convolutional neural network and compared this to a model with more convolutional layers, which resulted in a 6% higher Jaccard index and a shorter training time. The model was further improved by applying the CIELAB color space, and the performance of the final proposed architecture is evaluated on publicly available datasets from the ISBI challenges in 2016 and 2017. The proposed model has resulted in an improvement of as much as 7% for the segmentation metrics and 17% for the feature segmentation, which demonstrates the robustness of this unique hybrid framework and its potential for future applications as well as further improvement. (An illustrative sketch of building transform-domain input channels follows this entry.)
Keywords: Convolutional neural network | Dermoscopic features | Melanoma | Skin lesion segmentation | Transform domain
English article
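An illustrative sketch in the spirit of this abstract: convert the image to CIELAB and stack a 2-D DCT of the lightness channel as an extra input channel for the CNN. The choice of transform, normalisation, and channel layout here are assumptions; the paper's exact representation and fusion strategy may differ.

```python
import numpy as np
from skimage import color
from scipy.fft import dctn

def transform_domain_channels(rgb):
    """rgb: float image in [0, 1], shape (H, W, 3). Returns an (H, W, 4) array: CIELAB + DCT channel."""
    lab = color.rgb2lab(rgb)                         # CIELAB colour space
    lightness = lab[..., 0] / 100.0                  # scale L channel to roughly [0, 1]
    dct_l = dctn(lightness, norm="ortho")            # global 2-D DCT of the lightness channel
    dct_l = dct_l / (np.abs(dct_l).max() + 1e-8)     # bring coefficients to a comparable range
    return np.dstack([lab, dct_l])                   # stacked channels fed to the segmentation CNN
```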
7 Detection of flood disaster system based on IoT, big data and convolutional deep neural network (2020)
Natural disasters can be defined as a blend of natural risks and vulnerabilities. Each year, natural as well as human-instigated disasters bring about infrastructural damage, distress, revenue losses, and injuries, in addition to a huge death toll. Researchers around the globe are trying to find a unified solution to gather, store, and analyse Big Data (BD) in order to predict results for flood-based prediction systems. This paper proposes ideas and methods for the detection of flood disasters based on IoT, BD, and a convolutional deep neural network (CDNN) to overcome such difficulties. First, the input data is taken from the flood BD. Next, repeated data are removed using HDFS MapReduce. After removal of the repeated data, the data are pre-processed using missing value imputation and a normalization function. Then, based on the pre-processed data, rules are generated using a combination-of-attributes method. In the last stage, the generated rules are provided as input to the CDNN classifier, which classifies them into a) chances of flood occurrence and b) no chances of flood occurrence. The outcomes obtained from the proposed CDNN method are compared using parameters such as Sensitivity, Specificity, Accuracy, Precision, Recall, and F-score. Moreover, when the outcomes are compared with other existing algorithms such as the Artificial Neural Network (ANN) and the Deep Learning Neural Network (DNN), the proposed system gives more accurate results than the other methods. (A brief sketch of the preprocessing chain follows this entry.)
Keywords: Hadoop distributed file system (HDFS) | Convolutional deep neural network (CDNN) | Normalization | Rule generation | Missing value imputation
English article
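A brief pandas sketch of the preprocessing chain the abstract lists (duplicate removal, missing value imputation, min-max normalisation); in the paper the duplicate removal runs as an HDFS MapReduce job, and the result then feeds rule generation and the CDNN, none of which is reproduced here. Column handling below is a generic assumption.

```python
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                        # repeated records removed (MapReduce stage in the paper)
    df = df.fillna(df.mean(numeric_only=True))       # missing value imputation with column means
    num = df.select_dtypes("number")
    rng = (num.max() - num.min()).replace(0, 1)      # guard against constant columns
    df[num.columns] = (num - num.min()) / rng        # min-max normalisation to [0, 1]
    return df                                        # ready for rule generation and CDNN classification
```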
8 Exploiting potential of deep neural networks by layer-wise fine-grained parallelism (2020)
Deep neural networks (DNNs) have become more and more important for big data analysis. They usually use data parallelism or model parallelism for extreme-scale computing. However, both approaches achieve their performance improvements mainly through coarse-grained parallelization schemes; neither can fully exploit the parallelism potential of many-core systems (such as GPUs) for neural network models. Here, a new fine-grained parallelism strategy (named FiLayer) is presented based on layer-wise parallelization. It has two components: inter-layer parallelism and intra-layer parallelism. Inter-layer parallelism processes several neighboring layers of a network model in a pipelined manner. For intra-layer parallelism, the operations in one layer are separated into several parts and processed concurrently. CUDA streams are used to implement these fine-grained parallelism methods. A mathematical analysis of the influence of the fragment number on the performance of inter-layer parallelism is presented, along with an analysis of the influence of the CUDA stream number on the performance of intra-layer parallelism. The proposed approach is implemented in Caffe. Representative datasets, including CIFAR100 and ImageNet, are used in the experiments. The evaluation results show that the approach helps Caffe achieve remarkable speedups, which is highly relevant to big data analysis. (An illustrative CUDA-stream sketch follows this entry.)
Keywords: Deep learning | Fine-grained parallelism | CUDA stream
English article
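FiLayer itself is implemented inside Caffe with CUDA streams; the following is only an illustrative PyTorch analogue of the intra-layer idea, splitting a layer's input into fragments and launching each fragment on its own CUDA stream so kernels can overlap. The layer, fragment count, and synchronisation scheme are assumptions for the sketch, and a CUDA device is required.

```python
import torch
import torch.nn as nn

def run_fragments_on_streams(layer: nn.Module, x: torch.Tensor, n_streams: int = 2) -> torch.Tensor:
    """Run `layer` on fragments of the batch `x`, one CUDA stream per fragment."""
    fragments = x.chunk(n_streams, dim=0)
    streams = [torch.cuda.Stream() for _ in range(n_streams)]
    outputs = [None] * n_streams
    for i, (frag, s) in enumerate(zip(fragments, streams)):
        s.wait_stream(torch.cuda.current_stream())   # ensure x is ready before the side stream reads it
        with torch.cuda.stream(s):
            outputs[i] = layer(frag)                 # kernels on different streams may overlap
    torch.cuda.synchronize()                         # join all streams before using the results
    return torch.cat(outputs, dim=0)
```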
9 Minimum margin loss for deep face recognition
Publication year: 2020 - English PDF: 9 pages - Persian translation (Word): 24 pages
Face recognition has achieved great success, mainly owing to the rapid development of deep neural networks (DNNs) in recent years. Different loss functions can be used in a deep neural network, leading to different performance. Several loss functions have recently been proposed; however, they cannot solve the margin bias problem that exists in unbalanced datasets. In this paper, we propose to solve the margin bias problem by setting a minimum margin for all class pairs. We propose a new loss function called Minimum Margin Loss (MML), which aims to enlarge the margin of those class-centre pairs that are too close to each other, so as to enhance the discriminative power of the deep features. MML supervises the training process together with Softmax Loss and Centre Loss, so that the margins of all classes are attended to regardless of their class distribution. We implemented MML on the Inception-ResNet-v1 platform and conducted extensive experiments on seven face recognition datasets, including MegaFace, FaceScrub, LFW, SLLFW, YTF, IJB-B and IJB-C. The experimental results show that the proposed MML loss function achieves a new state of the art in face recognition and reduces the negative effect of margin bias. (A small sketch of the minimum-margin idea follows this entry.)
Keywords: Deep learning | Convolutional neural networks (CNN) | Face recognition | Minimum margin loss (MML)
Translated article
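A small PyTorch sketch of the minimum-margin idea described in the abstract: penalise class-centre pairs whose distance falls below a fixed minimum margin, to be used alongside softmax and centre loss. The margin value and the mean reduction are illustrative choices, not the paper's exact formulation.

```python
import torch

def minimum_margin_loss(class_centres: torch.Tensor, min_margin: float = 0.5) -> torch.Tensor:
    """class_centres: (num_classes, feat_dim). Penalises centre pairs closer than min_margin."""
    dists = torch.cdist(class_centres, class_centres, p=2)            # pairwise centre distances
    off_diag = ~torch.eye(class_centres.shape[0], dtype=torch.bool,
                          device=class_centres.device)
    violations = torch.clamp(min_margin - dists[off_diag], min=0.0)   # only pairs inside the margin contribute
    return violations.mean()
```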
10 A multi-stage deep adversarial learning method for video-based person re-identification
Publication year: 2020 - English PDF: 13 pages - Persian translation (Word): 42 pages
Video-based person re-identification (re-ID) can be viewed as the process of matching images of a person across different camera views captured in unaligned video footage. Existing methods use supervisory signals to optimise the cross-camera view space so that the distance between videos is maximised/minimised. However, this demands extensive person labelling at the level of camera views, which prevents these methods from scaling to large camera networks. It has also been noted that learning view-invariant video representations is difficult because image features each have their own distinct distributions. Matching videos for person re-identification therefore requires flexible models that can capture the dynamics in video observations and learn view-invariant representations from a limited number of labelled training samples. In this paper, we present a multi-stage deep learning approach for video-based person re-identification that learns comparable, discriminative, view-invariant representations of a person. The proposed method is built on variational recurrent neural networks (VRNNs) and is trained to produce latent variables with temporal dependencies that are highly discriminative yet view-invariant when matching person images. We conduct extensive experiments on three benchmark datasets and empirically demonstrate the ability of the proposed method to produce temporal, view-invariant features and the high performance it achieves. (A compact sketch of a VRNN step follows this entry.)
Keywords: Video-based person re-identification | Variational recurrent neural networks | Adversarial learning
Translated article
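A compact PyTorch sketch of a single variational RNN (VRNN) step, the building block this abstract relies on: a prior over the latent variable from the hidden state, a posterior from the current input, a reconstruction, and a GRU recurrence. Layer sizes and the single-linear parameterisations are simplifying assumptions; the paper's full multi-stage adversarial training is not reproduced.

```python
import torch
import torch.nn as nn

class VRNNCell(nn.Module):
    """One step of a variational RNN: prior p(z_t|h_{t-1}), posterior q(z_t|x_t,h_{t-1}),
    decoder p(x_t|z_t,h_{t-1}), and the recurrence h_t = GRU([x_t, z_t], h_{t-1})."""
    def __init__(self, x_dim: int, z_dim: int, h_dim: int):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)
        self.posterior = nn.Linear(x_dim + h_dim, 2 * z_dim)
        self.decoder = nn.Linear(z_dim + h_dim, x_dim)
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x_t, h_prev):
        p_mu, p_logvar = self.prior(h_prev).chunk(2, dim=-1)
        q_mu, q_logvar = self.posterior(torch.cat([x_t, h_prev], -1)).chunk(2, dim=-1)
        z_t = q_mu + torch.randn_like(q_mu) * (0.5 * q_logvar).exp()      # reparameterisation trick
        x_recon = self.decoder(torch.cat([z_t, h_prev], -1))
        h_t = self.rnn(torch.cat([x_t, z_t], -1), h_prev)
        kl = 0.5 * (p_logvar - q_logvar                                   # KL(q || p), diagonal Gaussians
                    + (q_logvar.exp() + (q_mu - p_mu) ** 2) / p_logvar.exp() - 1).sum(-1)
        return z_t, x_recon, h_t, kl
```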