No. | Title | Type |
---|---|---|
1 |
Predicting social media engagement with computer vision: An examination of food marketing on Instagram
2022 — In a crowded social media marketplace, restaurants often try to stand out by showcasing elaborate “Instagrammable” foods. Using an image classification machine learning algorithm (Google Vision AI) on restaurants’
Instagram posts, this study analyzes how the visual characteristics of product offerings (i.e., their food) relate to
social media engagement. Results demonstrate that food images that are more confidently evaluated by Google
Vision AI (a proxy for food typicality) are positively associated with engagement (likes and comments). A follow-up
experiment shows that exposure to typical-appearing foods elevates positive affect, suggesting they are easier
to mentally process, which drives engagement. Therefore, contrary to conventional social media practices and
food industry trends, the more typical a food appears, the more social media engagement it receives. Using
Google Vision AI to identify what product offerings receive engagement presents an accessible method for
marketers to understand their industry and inform their social media marketing strategies. Keywords: Social media marketing | Consumer engagement | Machine learning | Food | Processing fluency | Google Vision AI |
English article |
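The study's core analysis relates an image-classification confidence score (the typicality proxy) to engagement. A minimal sketch of that relationship test, using plain Pearson correlation; the confidence scores and like counts below are made-up illustrative numbers, not data from the study, and in practice the scores would come from an image-classification API such as Google Vision AI:

```python
# Correlate a label-confidence score (typicality proxy) with post likes.
# All numbers here are hypothetical, for illustration only.
from statistics import mean

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Hypothetical posts: (label confidence, likes)
confidence = [0.55, 0.62, 0.71, 0.80, 0.88, 0.93]
likes      = [120,  150,  180,  260,  300,  340]

r = pearson_r(confidence, likes)
print(f"correlation between typicality proxy and likes: {r:.2f}")
```

A positive `r`, as in the paper's finding, would indicate that more typical-looking food images attract more engagement.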
2 |
Detection of loosening angle for mark bolted joints with computer vision and geometric imaging
2022 — Mark bars drawn on the surfaces of bolted joints are widely used to indicate the severity of loosening. The
automatic and accurate determination of the loosening angle of mark bolted joints is a challenging issue that has
not been investigated previously, and whose automation would relieve workers of heavy inspection workloads. This study
proposes an automated method for detecting the loosening angle of mark bolted joints by integrating computer
vision and geometric imaging theory. This novel method contained three integrated modules. The first module
used a Keypoint Regional Convolutional Neural Network (Keypoint-RCNN)-based deep learning algorithm to
detect five keypoints and locate the region of interest (RoI). The second module recognised the mark ellipse and
mark points using the transformation of the five detected keypoints and several image processing technologies
such as dilation and expansion algorithms, a skeleton algorithm, and the least square method. In the last module,
according to the geometric imaging theory, we derived a precise expression to calculate the loosening angle using
the information for the mark points and mark ellipse. In lab-scale and real-scale environments, the average
relative detection error was only 3.5%. This indicated that our method could accurately calculate the loosening
angles of marked bolted joints even when the images were captured from an arbitrary view. In the future, some
segmentation algorithms based on deep learning, distortion correction, accurate angle and length measuring
instruments, and advanced transformation methods can be applied to further improve detection accuracy. Keywords: Mark bolted joint | Loosening detection | Keypoint-RCNN | Image processing | Geometric imaging |
English article |
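The last module recovers the loosening angle from the detected mark geometry. A toy version of that geometric step: given the bolt-head centre and the mark point before and after loosening (all in image coordinates), the swept angle follows from `atan2`. The real method works on the projected mark *ellipse* viewed from an arbitrary angle; this sketch assumes a head-on view where the ellipse degenerates to a circle, and the coordinates are hypothetical:

```python
# Recover the loosening angle swept by a mark point around the bolt centre.
# Assumes a head-on view (circle, not a projected ellipse).
import math

def loosening_angle(center, mark_before, mark_after):
    """Angle in degrees (0-360) swept by the mark around the centre."""
    a0 = math.atan2(mark_before[1] - center[1], mark_before[0] - center[0])
    a1 = math.atan2(mark_after[1] - center[1], mark_after[0] - center[0])
    return math.degrees((a1 - a0) % (2 * math.pi))

# Mark rotated a quarter turn around a bolt centred at (100, 100):
print(loosening_angle((100, 100), (130, 100), (100, 130)))  # 90.0
```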
3 |
Performance analysis of machine learning algorithm of detection and classification of brain tumor using computer vision
2022 — Brain tumors are undesirable, uncontrolled growths of cells that occur in all age groups. Classification of a tumor
depends on its origin and degree of aggressiveness, and it also helps the physician devise a proper diagnosis and
treatment plan. This research demonstrates the analysis of various state-of-the-art machine learning techniques
such as Logistic Regression, Multilayer Perceptron, Decision Tree, Naive Bayes classifier and Support Vector Machine for
classification of tumors as benign or malignant, with the discrete wavelet transform used for feature extraction on the
synthetic data available from the internet sources OASIS and ADNI. The research also reveals that
Logistic Regression and the Multilayer Perceptron give the highest accuracy of 90%. The Multilayer Perceptron mimics human
reasoning: it learns, memorizes, and is capable of reasoning and performing parallel computations. In the future,
many more AI techniques can be trained to classify multimodal MRI brain scans into more than two classes of
tumors. Keywords: Artificial Intelligence | MRI | Logistic regression | OASIS | Multilayer Perceptron |
English article |
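The abstract pairs several classifiers with a discrete wavelet transform (DWT) for feature extraction. Below is a minimal one-level Haar DWT in pure Python: it splits a signal into approximation (local averages) and detail (local differences) coefficients, which would then feed a classifier. The study's actual wavelet family and decomposition depth are not specified here, so treat this as an illustrative stand-in:

```python
# One-level Haar discrete wavelet transform for feature extraction.
import math

def haar_dwt(signal):
    """One-level Haar transform; signal length must be even."""
    s = math.sqrt(2)
    approx = [(a + b) / s for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / s for a, b in zip(signal[::2], signal[1::2])]
    return approx, detail

approx, detail = haar_dwt([4, 6, 10, 12, 8, 8, 2, 0])
print(approx)  # low-frequency features (local averages, scaled)
print(detail)  # high-frequency features (local differences, scaled)
```

The transform is orthogonal, so the signal's energy is preserved across the two coefficient sets; that is why the coefficients make compact, information-preserving features.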
4 |
Parameterized Hamiltonian Learning With Quantum Circuit
2022 — Hamiltonian learning, as an important quantum machine learning technique, provides a significant approach for
determining an accurate quantum system. This paper establishes parameterized Hamiltonian learning (PHL) and explores its
application and implementation on quantum computers. A parameterized quantum circuit for Hamiltonian learning is first created by
decomposing unitary operators to excite the system evolution. Then, a PHL algorithm is developed to prepare a specific Hamiltonian
system by iteratively updating the circuit parameters with the gradient of the loss function. Finally, experiments are conducted on
Origin Pilot, and they demonstrate that the PHL algorithm can deal with the image segmentation problem and provide an accurate
segmentation solution. Compared with the classical GrabCut algorithm, the PHL algorithm eliminates the requirement of early manual
intervention. It provides a new possibility for solving practical application problems with quantum devices, which also assists in solving
increasingly complicated problems and supports a much wider range of application possibilities in the future.
Index Terms: Quantum machine learning | Parameterized Hamiltonian learning (PHL) | parameterized quantum circuit | Hamiltonian learning algorithm | Image segmentation |
English article |
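The PHL algorithm's inner loop is gradient descent on circuit parameters, and for common parameterized gates the gradient of an expectation value can be obtained from two extra circuit evaluations via the parameter-shift rule. This stand-alone sketch runs that loop on a toy single-parameter expectation `f(θ) = cos(θ)` (a placeholder for a real circuit evaluation, e.g. ⟨Z⟩ after `RY(θ)|0⟩`), with no quantum SDK involved:

```python
# Gradient descent on a circuit parameter via the parameter-shift rule.
import math

def expectation(theta):
    # Placeholder for a circuit evaluation such as <Z> after RY(theta)|0>.
    return math.cos(theta)

def parameter_shift_grad(f, theta):
    """Exact gradient of a cos-shaped expectation from two evaluations."""
    return (f(theta + math.pi / 2) - f(theta - math.pi / 2)) / 2

theta, lr = 0.3, 0.4
for _ in range(100):                      # iterative parameter update loop
    theta -= lr * parameter_shift_grad(expectation, theta)

print(round(expectation(theta), 4))       # approaches the minimum, -1.0
```

On hardware, only the two shifted evaluations change; the update rule is identical.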
5 |
Feature based classification of voice based biometric data through Machine learning algorithm
2021 — In the era of big data and growing artificial intelligence, the requirement for and necessity of biometric identification are increasing rapidly. Digitalization and the recent pandemic crisis have boosted the need for authorized identification, which is fulfilled by biometric identification. Our paper focuses on checking the identification accuracy of the machine learning algorithm REPTree on a selected biometric dataset, deployed and evaluated in the data mining tool WEKA. Our target is to achieve 95 percent accuracy or more in predicting whether a given sample is correctly classified into our target variable values, i.e. male or female. The selected algorithm REPTree is a decision tree classification algorithm that works on the same concept as C4.5 and the decision tree algorithm, with the speciality of generating both kinds of output, i.e. discrete and continuous. The selection of the algorithm benefits us with higher accuracy, and selection of the dataset also becomes easier with some required modification and pre-processing of the data using dimension reduction filters. Keywords: Prediction | Biometric data | Voice samples | Male | Female | Cost complexity pruning (CCP) | Dimension reduction |
English article |
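REPTree is a decision-tree learner, and the essence of any such tree is a threshold split on a feature. As a self-contained stand-in for a tree's top split, here is a one-level decision stump that picks the threshold on a single voice feature (mean pitch in Hz; the values and the feature choice are hypothetical) minimising training error; a full tree applies this split recursively:

```python
# One-level decision stump: the building block of a decision tree.
def best_stump(values, labels):
    """Return (threshold, accuracy): predict 'female' when value >= threshold."""
    best = (None, 0.0)
    for t in sorted(set(values)):
        correct = sum(
            (v >= t) == (lab == "female") for v, lab in zip(values, labels)
        )
        acc = correct / len(values)
        if acc > best[1]:
            best = (t, acc)
    return best

pitch = [110, 125, 140, 180, 210, 230]   # hypothetical mean pitch (Hz)
sex   = ["male", "male", "male", "female", "female", "female"]
threshold, accuracy = best_stump(pitch, sex)
print(threshold, accuracy)  # 180 1.0
```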
6 |
Spare parts supply chain network modeling based on a novel scale-free network and replenishment path optimization with Q learning
2021 — The efficiency of inventory management determines customers’ buying experience, so a supply chain network with a shorter replenishment time is needed. The supply chain network should be robust to the stock-out of some distributors under random customer demands, and a replenishment path optimization method that minimizes replenishment time is required. A review of previous work on supply network topology shows that scale-free networks are efficient models of supply networks and that multi-agent based collaborative replenishment models are smarter; however, there is little research on multi-agent based collaborative replenishment in supply chains modelled as scale-free networks. In this study, we propose a spare parts supply chain network model based on a novel scale-free network. In the network growth process, the probability of connecting a new distributor to an existing distributor is constructed from the existing distributors’ connection numbers (for an existing distributor, its connection number is the number of other distributors with which it has a collaborative relationship), their inventory capacities, and the transit times between the new distributor and the existing ones. The connection probability function is built from the standpoints of both the new distributor and the existing distributors, and different selection policies are discussed in the network growth process to improve efficiency. Unlike other replenishment path optimization methods, Q learning has the advantage of interacting with the environment to make dynamic decisions, so it is selected to optimize the replenishment path in the supply chain network.
In the experiments, the static and dynamic performance of the network is analyzed using degree distribution, clustering coefficient, centrality and response time as indicators. Experimental results show that the replenishment time of the supply chain network optimized by Q learning is reduced by approximately 40%, verifying the shorter replenishment time of the proposed network. Keywords: Spare parts supply chain network | Scale-free network | Q learning algorithm | Random customer demands |
English article |
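Q learning is chosen in the abstract because it learns a replenishment path by interacting with the network rather than from a fixed model. A minimal tabular Q-learning agent on a tiny hypothetical distributor graph (nodes, edges, and transit times invented for illustration), where rewards are negative transit times and the goal is the supply source `S`:

```python
# Tabular Q-learning for a shortest replenishment path on a toy graph.
import random

# edges: distributor -> {neighbor: transit time}; "S" is the supply source
graph = {
    "A": {"B": 4, "C": 1},
    "B": {"S": 1},
    "C": {"B": 1, "S": 5},
    "S": {},
}
Q = {(n, m): 0.0 for n, nbrs in graph.items() for m in nbrs}
alpha, gamma, eps = 0.5, 1.0, 0.2
rng = random.Random(0)

for _ in range(2000):                         # training episodes
    state = "A"
    while state != "S":
        nbrs = list(graph[state])
        if rng.random() < eps:                # explore
            action = rng.choice(nbrs)
        else:                                 # exploit current estimates
            action = max(nbrs, key=lambda m: Q[(state, m)])
        # reward = negative transit time; bootstrap from the next state
        future = max((Q[(action, m)] for m in graph[action]), default=0.0)
        Q[(state, action)] += alpha * (
            -graph[state][action] + gamma * future - Q[(state, action)]
        )
        state = action

# Greedy replenishment path from A: total time 3 via C and B, not 5 via B.
path, node = ["A"], "A"
while node != "S":
    node = max(graph[node], key=lambda m: Q[(node, m)])
    path.append(node)
print(path)  # ['A', 'C', 'B', 'S']
```

The same update rule scales to the paper's setting by enlarging the state to include stock levels and demands.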
7 |
Bitcoin efficiency: A strongly typed genetic programming approach to smart electronic Bitcoin markets
Publication year: 2021 - English PDF pages: 14 - Persian doc pages: 47. Since Bitcoin was first proposed by Satoshi Nakamoto in 2008, cryptocurrencies have attracted considerable attention, highlighting their potential to play an important role in e-commerce. However, relatively little is known about cryptocurrencies, their price behavior, the speed at which they incorporate new information, and the efficiency of their markets. To extend the current literature in this area, we develop four smart electronic Bitcoin markets with different types of traders using a special adaptive form of the strongly typed genetic programming (STGP) based learning algorithm. We apply the STGP technique to historical Bitcoin data at one-minute and five-minute frequencies to investigate the formation of Bitcoin market dynamics and market efficiency. Through a battery of robust testing methods, we find that both Bitcoin markets populated with high-frequency traders (HFT) are efficient at the one-minute frequency but inefficient at the five-minute frequency. This finding supports the argument that at the one-minute frequency investors can incorporate new information quickly and rationally and do not suffer from the noise associated with the five-minute frequency. We also contribute to the e-commerce literature by showing that zero-intelligence traders cannot achieve market efficiency, thereby providing evidence against the Hayek hypothesis. One practical implication of this study is that e-commerce practitioners can use artificial intelligence tools such as STGP to perform behavior-based market profiling.
Keywords: Artificial intelligence | Smart electronic markets | Bitcoin trading | Cryptocurrencies | Evolutionary computation | Market efficiency |
Translated article |
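The study's efficiency question can be phrased in weak-form terms: in an efficient market, returns at the sampling frequency should be serially uncorrelated. A minimal sketch of that diagnostic, computing the lag-1 autocorrelation of a return series; the price numbers below are synthetic illustrative values, not Bitcoin data, and the full paper uses a much richer battery of tests:

```python
# Lag-1 autocorrelation of returns: near zero suggests weak-form efficiency.
from statistics import mean

def lag1_autocorr(returns):
    m = mean(returns)
    num = sum((a - m) * (b - m) for a, b in zip(returns, returns[1:]))
    den = sum((r - m) ** 2 for r in returns)
    return num / den

prices = [100, 101, 100.5, 101.7, 101.2, 102.0, 101.6, 102.4]
returns = [b / a - 1 for a, b in zip(prices, prices[1:])]
print(round(lag1_autocorr(returns), 3))
```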
8 |
A pointer network based deep learning algorithm for unconstrained binary quadratic programming problem
2020 — Combinatorial optimization problems are widely used in various fields, and many types of combinatorial optimization problems can be generalized into the model of unconstrained binary quadratic programming (UBQP). Therefore, designing an effective and efficient algorithm for UBQP problems will also contribute to solving other combinatorial optimization problems. A pointer network is an end-to-end sequential decision structure built with deep learning technology. By exploiting the structural characteristics of combinatorial optimization problems and deep learning’s ability to extract the rules behind the data, pointer networks have been successfully applied to several classical combinatorial optimization problems. In this paper, a pointer network based algorithm is designed to solve UBQP problems. The network model is trained by supervised learning (SL) and deep reinforcement learning (DRL) respectively, and the trained models are evaluated on a self-generated benchmark dataset and the ORLIB dataset respectively. Experimental results show that the pointer network model trained by SL has a strong ability to learn a specific data distribution, while the model trained by DRL learns more general data characteristics; in other words, it can quickly solve problems with great generalization ability. As a result, the framework proposed in this paper for UBQP has great potential for solving large-scale combinatorial optimization problems. Keywords: UBQP | Pointer network | Supervised learning | Deep reinforcement learning |
English article |
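Before reaching for a learned model, it helps to see the UBQP model itself: find a binary vector x minimising xᵀQx. This brute-force solver enumerates all 2ⁿ assignments for a small hypothetical Q matrix, which is exactly the exponential search the pointer network tries to shortcut:

```python
# Exhaustive UBQP solver: minimise x^T Q x over binary vectors x.
from itertools import product

def ubqp_brute_force(Q):
    """Return (best_value, best_x) minimising sum_ij Q[i][j]*x[i]*x[j]."""
    n = len(Q)
    best_val, best_x = float("inf"), None
    for x in product((0, 1), repeat=n):
        val = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if val < best_val:
            best_val, best_x = val, x
    return best_val, best_x

# A small hypothetical instance:
Q = [
    [-3,  2,  0],
    [ 2, -2,  1],
    [ 0,  1, -4],
]
print(ubqp_brute_force(Q))  # (-7, (1, 0, 1))
```

The 2ⁿ enumeration is infeasible beyond small n, which motivates heuristics and learned solvers such as the paper's pointer network.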
9 |
Deep reinforcement learning algorithm for dynamic pricing of express lanes with multiple access locations
2020 — This article develops a deep reinforcement learning (Deep-RL) framework for dynamic pricing on
managed lanes with multiple access locations and heterogeneity in travelers’ value of time,
origin, and destination. This framework relaxes assumptions in the literature by considering
multiple origins and destinations, multiple access locations to the managed lane, en route diversion
of travelers, partial observability of the sensor readings, and stochastic demand and
observations. The problem is formulated as a partially observable Markov decision process
(POMDP) and policy gradient methods are used to determine tolls as a function of real-time
observations. Tolls are modeled as continuous and stochastic variables and are determined using
a feedforward neural network. The method is compared against a feedback control method used
for dynamic pricing. We show that Deep-RL is effective in learning toll policies for maximizing
revenue, minimizing total system travel time, and other joint weighted objectives, when tested on
real-world transportation networks. The Deep-RL toll policies outperform the feedback control
heuristic for the revenue maximization objective by generating revenues up to 8.5% higher than
the heuristic and for the objective minimizing total system travel time (TSTT) by generating TSTT
up to 8.4% lower than the heuristic. We also propose reward shaping methods for the POMDP to
overcome the undesired behavior of toll policies, like the jam-and-harvest behavior of revenue-maximizing
policies. Additionally, we test the transferability of the algorithm trained on one set of
inputs for new input distributions and offer recommendations on real-time implementations of
Deep-RL algorithms. Keywords: Managed lanes | Express lanes | High occupancy/toll (HOT) lanes | Dynamic pricing | Deep reinforcement learning | Traffic control | Feedback control heuristic |
English article |
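The baseline the Deep-RL policies are compared against is a feedback control heuristic. A common form is a proportional controller that raises the toll when managed-lane occupancy exceeds a target and lowers it otherwise; the gain, toll bounds, and occupancy trace below are hypothetical, not values from the article:

```python
# Proportional feedback control for dynamic tolling (illustrative values).
def update_toll(toll, occupancy, target=0.8, gain=2.0, lo=0.5, hi=10.0):
    """One proportional feedback step; toll clipped to [lo, hi]."""
    toll += gain * (occupancy - target)
    return min(hi, max(lo, toll))

toll = 1.0
for occ in [0.9, 0.95, 0.85, 0.7, 0.6]:   # observed occupancy each interval
    toll = update_toll(toll, occ)
    print(round(toll, 2))
```

A Deep-RL policy replaces this fixed rule with a neural network mapping (possibly partial) observations to a toll distribution, which is what lets it pursue objectives like revenue or total system travel time directly.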
10 |
A novel axle temperature forecasting method based on decomposition, reinforcement learning optimization and neural network
2020 — Axle temperature forecasting technology is important for monitoring the status of the train bogie and preventing
the hot axle and other dangerous accidents. In order to achieve high-precision forecasting of axle temperature, a
hybrid axle temperature time series forecasting model based on decomposition preprocessing method, parameter
optimization method, and the Back Propagation (BP) neural network is proposed in this study. The modeling
process consists of three phases. In stage I, the empirical wavelet transform (EWT) method is used to preprocess
the original axle temperature series by decomposing them into several subseries. In stage II, the Q-learning
algorithm is used to optimize the initial weights and thresholds of the BP neural network. In stage III, the Q-BPNN
network is used to build the forecasting model and predict all subseries. The final
forecasting results are generated by combining the prediction results of all subseries. By comparing all results over
three case predictions, it can be concluded that: (a) the proposed Q-learning based parameter optimization
method is effective in improving the accuracy of the BP neural network and works better than the traditional
population-based optimization methods; (b) the proposed hybrid axle temperature forecasting model can get
accurate prediction results in all cases and provides the best accuracy among eight general models. Keywords: Axle temperature forecasting | Hybrid model | Empirical wavelet transform | Q-learning algorithm | Parameter optimization | Q-BPNN network |
English article |
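Stage II tunes the BP network's initial weights; the network itself then learns by backpropagation, i.e. gradient descent on the squared error. A minimal single-neuron version of that training loop, fitted to a hypothetical linear axle-temperature trend (the data and hyperparameters are invented for illustration; the paper's network is multi-layer):

```python
# Gradient descent on mean squared error for a single linear neuron.
def train_neuron(xs, ys, w=0.0, b=0.0, lr=0.05, epochs=2000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    n = len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

xs = [0, 1, 2, 3, 4]                   # time steps
ys = [30.0, 31.5, 33.0, 34.5, 36.0]   # hypothetical axle temperatures (C)
w, b = train_neuron(xs, ys)
print(round(w, 2), round(b, 2))  # converges toward 1.5 and 30.0
```

Because this loss surface has local plateaus in deeper networks, the starting weights matter, which is the gap the paper's Q-learning initialization targets.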