Dear users, please note that translations of articles published before 2008 are free of charge; you can download the translation for free from the article details page.
Multistability of switched neural networks with sigmoidal activation functions under state-dependent switching
Publication year: 2020
This paper presents theoretical results on the multistability of switched neural networks with commonly used sigmoidal activation functions under state-dependent switching. The multistability analysis with such an activation function is difficult because the state-space partition is not as straightforward as with piecewise-linear activations. Sufficient conditions are derived for ascertaining the existence and stability of multiple equilibria. It is shown that the number of stable equilibria of an n-neuron switched neural network is up to 3^n under the given conditions. In contrast to existing multistability results with piecewise-linear activation functions, the results herein are also applicable to equilibria at switching points. Four examples are discussed to substantiate the theoretical results.
Keywords: Multistability | Switched neural network | State-dependent | Sigmoidal activation function
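As a hedged illustration of why sigmoidal activations admit multiple stable equilibria, the following one-neuron sketch (not the paper's switched n-neuron system; the gain w = 2 and the tanh activation are illustrative choices) converges to different equilibria from different initial states:

```python
import math

def simulate(x0, w=2.0, steps=2000, dt=0.01):
    """Euler-integrate dx/dt = -x + w*tanh(x), a one-neuron
    Hopfield-type model with a sigmoidal activation."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * math.tanh(x))
    return x

# With w > 1 the line y = x crosses y = w*tanh(x) three times:
# two stable equilibria (positive and negative) and an unstable one at 0.
hi = simulate(0.5)   # converges to the positive stable equilibrium
lo = simulate(-0.5)  # converges to the negative stable equilibrium
```

Different initial conditions thus settle on different stable states, which is the one-neuron analogue of the up-to-3^n count discussed above.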
Extreme learning machine for a new hybrid morphological/linear perceptron
Publication year: 2020
Morphological neural networks (MNNs) can be characterized as a class of artificial neural networks that perform an operation of mathematical morphology at every node, possibly followed by the application of an activation function. Morphological perceptrons (MPs) and (gray-scale) morphological associative memories are among the most widely known MNN models. Since their neuronal aggregation functions are not differentiable, classical methods of non-linear optimization cannot, in principle, be directly applied to train these networks. The same observation holds true for hybrid morphological/linear perceptrons and other related models. Circumventing these problems of non-differentiability, this paper introduces an extreme learning machine approach for training a hybrid morphological/linear perceptron whose morphological components are drawn from previous MP models. We apply the resulting model to a number of well-known classification problems from the literature and compare its performance with those of several related models, including some recent MNNs and hybrid morphological/linear neural networks.
Keywords: Mathematical morphology | Lattice computing | Morphological neural networks | Hybrid morphological/linear perceptron | Extreme learning machine | Classification
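The extreme learning machine scheme the paper builds on can be sketched as a random, fixed hidden layer plus a closed-form ridge solve for the output weights. The sketch below replaces the paper's morphological components with an ordinary tanh hidden layer; the hidden-layer size, ridge constant and toy regression task are illustrative assumptions:

```python
import math, random

random.seed(0)

def solve(A, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [r] for row, r in zip(A, rhs)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hidden(W, b, xi):
    # fixed random hidden layer with a sigmoidal (tanh) activation
    return [math.tanh(sum(w * v for w, v in zip(wj, xi)) + bj)
            for wj, bj in zip(W, b)]

def elm_fit(X, y, n_hidden=20, ridge=1e-3):
    """ELM: hidden weights stay random; only the linear output weights
    are learned, in closed form via ridge-regularized least squares."""
    d = len(X[0])
    W = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [random.uniform(-1, 1) for _ in range(n_hidden)]
    H = [hidden(W, b, xi) for xi in X]
    # normal equations: (H^T H + ridge * I) beta = H^T y
    A = [[sum(Hi[j] * Hi[k] for Hi in H) + (ridge if j == k else 0.0)
          for k in range(n_hidden)] for j in range(n_hidden)]
    rhs = [sum(Hi[j] * yi for Hi, yi in zip(H, y)) for j in range(n_hidden)]
    return W, b, solve(A, rhs)

def elm_predict(model, X):
    W, b, beta = model
    return [sum(bj * hj for bj, hj in zip(beta, hidden(W, b, xi)))
            for xi in X]

X = [[i / 25 - 1] for i in range(51)]   # toy task: learn f(x) = x^2
y = [xi[0] ** 2 for xi in X]
pred = elm_predict(elm_fit(X, y), X)
```

Because only the output layer is trained, the non-differentiability of the hidden nodes (morphological or otherwise) never enters the optimization, which is exactly the property the paper exploits.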
Matrix-valued twin-multistate Hopfield neural networks
Publication year: 2020
A complex-valued Hopfield neural network (CHNN) has been widely used for the storage of image data. The CHNN has been extended using hypercomplex numbers. A couple of hypercomplex-valued Hopfield neural networks employ a twin-multistate activation function to reduce the number of weight parameters. In this work, we propose a matrix-valued twin-multistate Hopfield neural network (MTMHNN), whose neuron states and weights are 2 × 2 matrices. Computer simulations show that the MTMHNN has better noise tolerance than the hypercomplex-valued twin-multistate Hopfield neural networks.
Keywords: Complex-valued neural networks | Hopfield neural networks | Twin-multistate activation function
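For readers unfamiliar with Hopfield networks, a minimal real-valued (bipolar) sketch with Hebbian learning is shown below; the proposed MTMHNN replaces these scalar states and weights with 2 × 2 matrices and a twin-multistate activation, which is not reproduced here:

```python
def hebbian_weights(patterns):
    """Hebbian outer-product learning; diagonal set to zero."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, x, steps=10):
    """Synchronous sign updates until a fixed point (or step limit)."""
    for _ in range(steps):
        new = [1 if sum(W[i][j] * x[j] for j in range(len(x))) >= 0 else -1
               for i in range(len(x))]
        if new == x:
            break
        x = new
    return x

p1 = [1, 1, 1, 1, -1, -1, -1, -1]
p2 = [1, -1, 1, -1, 1, -1, 1, -1]   # orthogonal to p1
W = hebbian_weights([p1, p2])
noisy = [-1] + p1[1:]               # p1 with the first bit flipped
restored = recall(W, noisy)         # recovers the stored pattern
```

The noise-tolerance comparison in the paper measures exactly this kind of recall from corrupted states, only with matrix-valued neurons.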
Deep learning for continuous manufacturing of pharmaceutical solid dosage form
Publication year: 2020
Continuous Manufacturing (CM) of pharmaceutical drug products is a new approach within the pharmaceutical industry. In the presented paper, a GMP continuous wet-granulation line for the production of solid dosage forms was investigated. The line was composed of the following continuous unit operations: feeding – twin-screw wet granulation – fluid-bed drying – sieving – tableting. The formulation of a commercial entity was selected for this study. Several critical process parameters were evaluated in order to probe the process and to characterize their impact on quality attributes. Seven critical process parameters were selected after a risk analysis: the API and excipient mass flows of the two feeders, the liquid feed rate and rotation speed of the extruder, and the rotation speed, temperature and airflow of the dryer. Eight quality attributes were controlled in real time by Process Analytical Technologies (PAT): API content after the blender, after the dryer, in the tablet-press feed frame and in the tablet; LOD after the dryer; and PSD after the dryer (three PSD parameters: x10, x50, x90). The process parameter values were changed during production in order to detect their impact on the quality of the final product. Deep learning techniques were used to predict the quality attributes (outputs) from the process parameters (inputs). The use of deep learning reduces noise and simplifies data interpretation for a better process understanding. After optimization, a neural network with three hidden layers and six hidden neurons was selected. The ReLU (Rectified Linear Unit) activation function and the ADAM optimizer were used with 2500 epochs (learning cycles). API contents, PSD values and LOD values were estimated with a calibration error lower than 10%. This level of error allows adequate process monitoring by the DNN, and it was shown that the main critical process parameters can be identified at a higher level of process understanding.
The synergy between PAT and process data science creates a superior monitoring framework for the continuous manufacturing line and increases the knowledge of this innovative production line and of the products it makes.
Keywords: Continuous manufacturing | Solid dosage form | Process monitoring | Process analytical technology | Deep learning | Process data science | Process data analytics
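The selected topology (7 process parameters in, three hidden ReLU layers, 8 quality attributes out) can be sketched as a forward pass; the weights below are random stand-ins for trained parameters, and six neurons per hidden layer is an assumption where the abstract is ambiguous:

```python
import random

random.seed(1)

def layer(n_in, n_out):
    # random weights stand in for parameters a trained network would have
    W = [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
         for _ in range(n_out)]
    b = [0.0] * n_out
    return W, b

def forward(x, layers):
    for idx, (W, b) in enumerate(layers):
        x = [sum(w * v for w, v in zip(row, x)) + bi
             for row, bi in zip(W, b)]
        if idx < len(layers) - 1:        # ReLU on hidden layers only
            x = [max(0.0, v) for v in x]
    return x

# 7 process parameters in, three hidden ReLU layers of 6 neurons,
# 8 quality attributes out -- the topology reported in the abstract
net = [layer(7, 6), layer(6, 6), layer(6, 6), layer(6, 8)]
quality = forward([0.2] * 7, net)
```

In the paper, such a network is fitted with the ADAM optimizer over 2500 epochs; the sketch shows only the inference step that the real-time monitoring relies on.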
Comparing of deep neural networks and extreme learning machines based on growing and pruning approach
Publication year: 2020
Recently, studies based on Deep Neural Networks and Extreme Learning Machines have become prominent. In these studies, the model parameters have typically been chosen at random, and the models designed accordingly. The main focus of this study is to determine the ideal parameters, i.e. the optimum number of hidden layers, the optimum number of hidden neurons and the activation function, for Deep Neural Network and Extreme Learning Machine architectures based on a growing and pruning approach, and to compare the performances of the models designed. The performances of the models are evaluated on two datasets: the Parkinson and the Self-Care Activities datasets. Multiple experiments have verified that the Deep Neural Network architectures present good prediction performance and outperform the Extreme Learning Machines.
Keywords: Deep Neural Networks | Extreme Learning Machines | Growing and pruning | Parkinson | Self-care activities
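One common way to realize a growing approach, sketched here under the assumption of an incremental ELM-style scheme (the paper's exact procedure may differ), is to add random hidden neurons one at a time, fit each new output weight to the current residual, and stop growing once the error is small enough:

```python
import math, random

random.seed(0)

def grow(X, y, max_neurons=30, tol=1e-3):
    """Grow a single-hidden-layer network one random tanh neuron at a
    time; each new output weight is the least-squares fit to the
    current residual, so the training error never increases."""
    residual = y[:]
    neurons = []
    for _ in range(max_neurons):
        w, b = random.uniform(-3, 3), random.uniform(-3, 3)
        h = [math.tanh(w * x + b) for x in X]
        hh = sum(v * v for v in h)
        if hh == 0:
            continue
        beta = sum(hv * rv for hv, rv in zip(h, residual)) / hh
        residual = [rv - beta * hv for hv, rv in zip(h, residual)]
        neurons.append((w, b, beta))
        if max(abs(r) for r in residual) < tol:   # stop growing
            break
    return neurons, residual

X = [i / 10 for i in range(-10, 11)]
y = [math.sin(2 * x) for x in X]
neurons, residual = grow(X, y)
```

Pruning would then discard neurons whose output weights contribute least; the growing loop above is the half of the approach that determines the hidden-neuron count automatically.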
A novel neural network based image descriptor for texture classification
Publication year: 2019
Nowadays, image processing and artificial intelligence have become popular science areas. One of the major problems of image processing is texture classification; therefore, many methods have been presented for texture classification. In this article, a new textural feature extraction method is proposed, in which feed-forward neural networks are utilized as feature extractors. The main purpose of the proposed method is to show the feature extraction capability of the feed-forward neural network. The descriptor consists of division into 3 x 3 overlapping blocks, creation of a feature extraction network using the row and column pixels of each block, calculation of the feature value, normalization and histogram extraction. First, the image is divided into 3 x 3 overlapping blocks and the pixels of each block are used to create feed-forward networks. To calculate the weights, neighbor pixel values and the signum function are used together. The hyperbolic tangent function is utilized as the activation function in these networks. PCA (Principal Component Analysis) reduces the feature dimensionality, and LDA (Linear Discriminant Analysis) is chosen as the classifier. The experimental results were obtained on commonly used texture datasets with variable parameters: UIUC, Outex and USPTex. The classification accuracies were 90.82%, 89.62% and 93.83% for these datasets, respectively. The results were compared with 16 related methods, and the proposed method achieved the best performance among them. The space complexity of the method was also calculated, and its cost is given in the experiments. The computational cost results demonstrate that the proposed neural network based image descriptor has low complexity.
The results clearly illustrate that the proposed textural image descriptor extracts distinctive features with short execution time, has a simple mathematical background, is a good discriminator and outperforms comparable methods.
Keywords: Feed forward textural feature extraction | Texture analysis | Texture recognition | Pattern recognition | Classification
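A heavily simplified sketch of the descriptor pipeline (3 x 3 blocks, signum-based weights, tanh activation, histogram) is given below; the paper's exact weight construction, normalization, and the PCA/LDA stages are omitted, and the bin count is an illustrative choice:

```python
import math

def block_descriptor(img, bins=8):
    """For each 3x3 block: weight each neighbour by the signum of its
    difference from the centre pixel, pass the weighted sum through
    tanh, and accumulate the quantized responses in a histogram."""
    h, w = len(img), len(img[0])
    hist = [0] * bins
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            centre = img[r][c]
            s = 0.0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr or dc:
                        n = img[r + dr][c + dc]
                        sign = 1 if n > centre else (-1 if n < centre else 0)
                        s += sign * n / 255.0       # signum-based weight
            val = math.tanh(s)                      # activation, in (-1, 1)
            hist[min(bins - 1, int((val + 1) / 2 * bins))] += 1
    return hist

# tiny synthetic 6x6 gray-scale image; 4x4 inner positions yield 16 blocks
img = [[(r * 7 + c * 13) % 256 for c in range(6)] for r in range(6)]
hist = block_descriptor(img)
```

The resulting histogram is the raw feature vector that the paper then compresses with PCA before LDA classification.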
Road surface condition classification using deep learning
Publication year: 2019
Traditional image recognition technology currently cannot achieve the fast, real-time, high-accuracy performance necessary for road recognition in intelligent driving. Deep learning models have recently emerged as promising tools to achieve this performance. The recognition performance of such models can be boosted by appropriate selection of the activation functions. This paper proposes a deep learning approach for the classification of road surface conditions, and constructs a new activation function based on the Rectified Linear Unit (ReLU) activation function. The experimental results show a classification accuracy of 94.89% on the road state database. Experiments on public datasets demonstrate that the proposed convolutional neural network model with the improved activation function has better generalization and excellent classification performance.
Keywords: Deep learning | Road condition | Activation function | Image recognition | Intelligent driving
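The abstract does not specify the paper's ReLU-based construction; as a generic illustration, one common way to modify ReLU is to let negative inputs pass with a small slope (the slope value below is an assumption, not the paper's function):

```python
def relu(x):
    # standard ReLU: zero for negative inputs
    return max(0.0, x)

def leaky_relu(x, alpha=0.1):
    # a common ReLU modification: a small slope for x < 0 keeps a
    # nonzero gradient flowing through negative activations
    return x if x > 0 else alpha * x
```

Variants along these lines change only the negative half-axis, which is typically where improved generalization is sought.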
Extreme minimal learning machine: Ridge regression with distance-based basis
Publication year: 2019
The extreme learning machine (ELM) and the minimal learning machine (MLM) are nonlinear and scalable machine learning techniques with a randomly generated basis. Both techniques start with a step in which a matrix of weights for the linear combination of the basis is recovered. In the MLM, the feature mapping in this step corresponds to distance calculations between the training data and a set of reference points, whereas in the ELM, a transformation using a radial or sigmoidal activation function is commonly used. Computation of the model output, for prediction or classification purposes, is straightforward with the ELM after the first step. In the original MLM, one needs to solve an additional multilateration problem for the estimation of the distance-regression based output. A natural combination of these two techniques is proposed and experimented with here: to use the distance-based basis characteristic of the MLM in the learning framework of the regularized ELM. In other words, we conduct ridge regression using a distance-based basis. The experimental results characterize the basic features of the proposed technique and, surprisingly, indicate that overlearning with the distance-based basis is in practice avoided in classification problems. This makes the model selection for the proposed method trivial, at the expense of computational costs.
Keywords: Randomized learning machines | Extreme learning machine | Minimal learning machine | Extreme minimal learning machine
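The proposed combination, ridge regression over a distance-based basis, can be sketched in a few lines. The reference points, the 1-D toy task and the use of gradient descent instead of a closed-form solve are all simplifying assumptions:

```python
import math

def distance_basis(X, refs):
    # MLM-style feature map: Euclidean distances to reference points
    return [[math.dist(x, r) for r in refs] for x in X]

def ridge_gd(H, y, lam=1e-3, lr=0.1, epochs=3000):
    """Ridge regression fitted by plain gradient descent on
    mean squared error + lam * ||b||^2 (a closed-form solve would
    normally be used; gradient descent keeps the sketch short)."""
    n, m = len(H), len(H[0])
    b = [0.0] * m
    for _ in range(epochs):
        grad = [2 * lam * bj for bj in b]
        for hi, yi in zip(H, y):
            e = sum(hj * bj for hj, bj in zip(hi, b)) - yi
            for j in range(m):
                grad[j] += 2 * e * hi[j] / n
        for j in range(m):
            b[j] -= lr * grad[j]
    return b

# distance-based basis + ridge regression = the proposed combination
X = [[i / 10] for i in range(-10, 11)]
y = [abs(x[0]) for x in X]            # toy target: f(x) = |x|
refs = [[-1.0], [0.0], [1.0]]         # reference points, chosen by hand
H = distance_basis(X, refs)
beta = ridge_gd(H, y)
pred = [sum(hj * bj for hj, bj in zip(hi, beta)) for hi in H]
```

Note how the distance basis makes the model selection trivial in the sense the abstract describes: the only choices are the reference points and the ridge constant.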
Enhancing batch normalized convolutional networks using displaced rectifier linear units: A systematic comparative study
Publication year: 2019
A substantial number of expert and intelligent systems rely on deep learning methods to solve problems in areas such as economics, physics, and medicine. Improving the accuracy of the activation functions used by such methods can directly and positively impact the overall performance and quality of the mentioned systems at no cost whatsoever. In this sense, enhancing the design of such theoretical fundamental blocks is of great significance, as it immediately impacts a broad range of current and future real-world deep learning based applications. Therefore, in this paper, we turn our attention to the interworking between activation functions and batch normalization, which is currently a practically mandatory technique for training deep networks. We propose the activation function Displaced Rectifier Linear Unit (DReLU) by conjecturing that extending the identity function of ReLU into the third quadrant enhances compatibility with batch normalization. Moreover, we used statistical tests to compare the impact of using distinct activation functions (ReLU, LReLU, PReLU, ELU, and DReLU) on the learning speed and test accuracy of standardized VGG and Residual Network state-of-the-art models. These Convolutional Neural Networks were trained on CIFAR-100 and CIFAR-10, the most commonly used deep learning computer vision datasets. The results showed that DReLU sped up learning in all models and datasets. Besides, statistically significant performance assessments (p < 0.05) showed that DReLU enhanced the test accuracy presented by ReLU in all scenarios. Furthermore, DReLU showed better test accuracy than any other tested activation function in all experiments with one exception, in which case it presented the second best performance. Therefore, this work demonstrates that it is possible to increase performance by replacing ReLU with an enhanced activation function.
Keywords: DReLU | Activation function | Batch normalization | Comparative study | Convolutional Neural Networks | Deep learning
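Reading the abstract literally, extending the identity part of ReLU into the third quadrant gives an activation of the form max(x, -delta); the displacement value below is an assumed hyperparameter, not taken from the paper:

```python
def relu(x):
    # standard ReLU: identity in the first quadrant, zero elsewhere
    return max(0.0, x)

def drelu(x, delta=0.05):
    # displaced variant as the abstract describes it: the identity
    # segment extends into the third quadrant down to -delta, so
    # activations can take small negative values (delta is assumed)
    return max(x, -delta)
```

Allowing slightly negative outputs keeps the post-activation distribution closer to zero mean, which is the conjectured source of the better interplay with batch normalization.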
Prediction of construction site overhead costs using an artificial neural network-based model
Publication year: 2018 - English PDF file: 10 pages - Persian doc file: 26 pages
Overhead costs, and site overheads in particular, constitute a significant component of a contractor's budget in a construction project. Traditional approaches to estimating site overheads are either accurate but time-consuming (when detailed analysis methods are used) or fast but inaccurate (when index-based methods are used). The aim of the research presented in this paper was to develop an alternative model allowing fast and reliable estimation of site overhead costs. The paper presents the results of the authors' work on developing a regression model, based on artificial neural networks, that makes it possible to predict a site overhead cost index which, used in conjunction with other cost data, allows site overheads to be estimated. A database of 143 completed construction projects was used to develop the model. The modelling involved a number of multilayer perceptron artificial neural networks with different structures, activation functions and training algorithms. The neural network selected as the core of the developed model predicts the cost index and supports the estimation of site overheads in the early stages of a construction project with satisfactory accuracy.
Keywords: Site overhead costs | Artificial neural networks | Construction cost management
|Translated article|