Download and view articles related to machine learning algorithms :: Page 1
Download immediately after payment


Search results - machine learning algorithms

Number of articles found: 26
Row | Title | Type
1 Computer vision for anatomical analysis of equipment in civil infrastructure projects: Theorizing the development of regression-based deep neural networks (2022)
There is high demand for heavy equipment in civil infrastructure projects and their performance is a determinant of the successful delivery of site operations. Although manufacturers provide equipment performance handbooks, additional monitoring mechanisms are required to depart from measuring performance on the sole basis of unit cost for moved materials. Vision-based tracking and pose estimation can facilitate site performance monitoring. This research develops several regression-based deep neural networks (DNNs) to monitor equipment with the aim of ensuring safety, productivity, sustainability and quality of equipment operations. Annotated image libraries are used to train and test several backbone architectures. Experimental results reveal the precision of DNNs with depthwise separable convolutions and computational efficiency of DNNs with channel shuffle. This research provides scientific utility by developing a method for equipment pose estimation with the ability to detect anatomical angles and critical keypoints. The practical utility of this study is the provision of potentials to influence current practice of articulated machinery monitoring in projects.
Keywords: Artificial intelligence (AI) | Cyber physical systems | Error evaluation metrics | Experimental design and testing | Full body pose estimation | Industry and construction 4.0 | Machine learning algorithms | Network backbone architectures
English article
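The entry above credits its precision to regression-based DNNs built from depthwise separable convolutions. As a hedged illustration only (the paper's actual backbones are not reproduced; the layer widths, input size, and keypoint count below are all assumptions), this PyTorch sketch shows what such a block feeding a keypoint-regression head can look like:

```python
# Minimal sketch (not the paper's code): a depthwise separable convolution
# block of the kind the abstract credits for precision, feeding a regression
# head that outputs (x, y) coordinates for K keypoints.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        # Depthwise: one 3x3 filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class KeypointRegressor(nn.Module):
    """Regresses K keypoints as normalized (x, y) pairs -- hypothetical head."""
    def __init__(self, num_keypoints=8):
        super().__init__()
        self.backbone = nn.Sequential(
            DepthwiseSeparableConv(3, 32, stride=2),
            DepthwiseSeparableConv(32, 64, stride=2),
            DepthwiseSeparableConv(64, 128, stride=2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, num_keypoints * 2)

    def forward(self, x):
        z = self.backbone(x).flatten(1)
        return torch.sigmoid(self.head(z))  # coordinates scaled to [0, 1]

model = KeypointRegressor(num_keypoints=8)
print(model(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 16])
```

The depthwise/pointwise split is what buys the efficiency the abstract mentions: a full 3x3 convolution costs roughly in_ch times more multiplications than the depthwise layer alone.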
2 Plant leaf disease detection using computer vision and machine learning algorithms (2022)
Agriculture provides food for the entire human population even as that population grows rapidly. Predicting plant diseases at an early stage is therefore essential for securing the food supply, yet early-stage prediction remains difficult in practice. The idea behind this paper is to make farmers aware of cutting-edge technologies that can reduce leaf diseases in plants. Since the tomato is a widely available vegetable, machine learning and image processing approaches with an accurate algorithm are applied to detect leaf diseases in the tomato plant. In this investigation, samples of tomato leaves with disorders are considered; with these samples, farmers can recognize diseases from their early symptoms. First, the tomato leaf samples are resized to 256 × 256 pixels, and histogram equalization is used to improve their quality. K-means clustering is introduced to partition the data space into Voronoi cells, and the boundary of each leaf sample is extracted using contour tracing. Multiple descriptors, namely the Discrete Wavelet Transform, Principal Component Analysis and the Grey Level Co-occurrence Matrix, are used to extract informative features from the leaf samples. Finally, the extracted features are classified using machine learning approaches such as the Support Vector Machine (SVM), Convolutional Neural Network (CNN) and K-Nearest Neighbor (K-NN). The accuracy of the proposed model on disordered tomato samples is 88% with SVM, 97% with K-NN and 99.6% with CNN.
Keywords: Convolutional Neural Networks | Discrete Wavelet Transform | Principal Component Analysis | Nearest Neighbor | Leaf disease
English article
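Since this abstract spells out a concrete pipeline (resize to 256 × 256, histogram equalization, K-means segmentation, feature extraction, SVM classification), a minimal sketch may help. The Python code below is only an outline under stated assumptions: file paths and labels are hypothetical, and simple color histograms stand in for the paper's DWT/PCA/GLCM descriptors.

```python
# Sketch of the preprocessing/classification pipeline the abstract outlines:
# resize -> histogram equalization -> K-means segmentation -> features -> SVM.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def preprocess(path):
    img = cv2.imread(path)                        # BGR image from disk
    img = cv2.resize(img, (256, 256))             # resize as in the paper
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    gray = cv2.equalizeHist(gray)                 # histogram equalization
    return img, gray

def segment(img, k=3):
    # K-means partitions pixel colors into k clusters (Voronoi cells).
    pixels = img.reshape(-1, 3).astype(np.float32)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels)
    return labels.reshape(img.shape[:2])

def features(img):
    # Placeholder descriptor: per-channel histograms instead of DWT/PCA/GLCM.
    return np.concatenate([np.histogram(img[..., c], bins=16, range=(0, 255))[0]
                           for c in range(3)]).astype(float)

# Hypothetical usage with lists of image paths and disease labels:
# X = np.array([features(preprocess(p)[0]) for p in paths])
# clf = SVC(kernel="rbf").fit(X, y)
```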
3 Predicting Bitcoin price using machine learning: an approach to sample dimension engineering
Publication year: 2020 - English PDF pages: 13 - Persian DOC pages: 32
After the ups and downs of cryptocurrency prices in recent years, Bitcoin has increasingly been regarded as an investment asset. Because of the highly volatile nature of the Bitcoin price, good predictions are needed on which investment decisions can be based. Although current studies have used machine learning for more accurate Bitcoin price prediction, few have considered the possibility of applying different modeling techniques to samples with different data structures and dimensional features. To predict Bitcoin price at different frequencies using machine learning techniques, we first classify the Bitcoin price into daily price and high-frequency price. A set of high-dimensional features, including property and network, trading and market, attention, and the gold spot price, is used for daily Bitcoin price prediction, while the basic trading features obtained from a cryptocurrency exchange are used for price prediction at 5-minute intervals. Statistical methods, including logistic regression and linear discriminant analysis, achieve 66% accuracy for daily price prediction with high-dimensional features, outperforming more complex machine learning algorithms. Compared with benchmark results for daily price prediction, we achieve better performance, with the highest accuracies of the statistical methods and machine learning algorithms at 66% and 65.3%, respectively. Machine learning models, including Random Forest, XGBoost, quadratic discriminant analysis, support vector machine and long short-term memory, outperform the statistical methods for 5-minute Bitcoin price prediction, with accuracy reaching 67.2%. Our investigation of Bitcoin price prediction can be considered a preliminary study of the importance of sample dimensions in machine learning techniques.
Keywords: Sample dimension engineering | Occam’s Razor principle | Bitcoin price prediction | Machine learning algorithms
Translated article
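As a rough companion to the translated abstract, the sketch below frames daily Bitcoin price movement as binary classification with logistic regression, the statistical method the study found most accurate for daily prediction. The data and the 40 features are synthetic stand-ins for the paper's property/network, trading/market, attention and gold-spot-price variables.

```python
# Illustrative only: daily price direction (up/down) as binary classification
# with logistic regression on a high-dimensional feature matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))      # 40 hypothetical daily features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000)) > 0  # synthetic up/down

# shuffle=False mimics a chronological train/test split for time series.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```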
5 Identifying influential factors distinguishing recidivists among offender patients with a diagnosis of schizophrenia via machine learning algorithms (2020)
Purpose: There is a lack of research on predictors of criminal recidivism of offender patients diagnosed with schizophrenia. Methods: 653 potential predictor variables were analyzed in a set of 344 offender patients with a diagnosis of schizophrenia (209 reconvicted) using machine learning algorithms. As a novel methodological approach, null hypothesis significance testing (NHST), backward selection, logistic regression, trees, support vector machines (SVM), and naive Bayes were used for preselecting variables. Subsequently, the variables identified as most influential were used for machine learning algorithm building and evaluation. Results: The two final models (with/without imputation) predicted criminal recidivism with an accuracy of 81.7 % and 70.6 % and a predictive power (area under the curve, AUC) of 0.89 and 0.76 based on the following predictors: prescription of amisulpride prior to reoffending, suspended sentencing to imprisonment, legal complaints filed by relatives/therapists/public authorities, recent legal issues, number of offences leading to forensic treatment, anxiety upon discharge, being single, violence toward care team and constant breaking of rules during treatment, illegal opioid use, middle east as place of birth, and time span since the last psychiatric inpatient treatment. Conclusion: Results provide new insight on possible factors influencing persistent offending in a specific subgroup of patients with a schizophrenic spectrum disorder.
Keywords: Criminal justice | Criminal recidivism | Machine learning | Offender | Schizophrenia
English article
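The study's two-stage design (preselect influential variables from 653 candidates, then build and evaluate models on that subset) can be sketched generically, as below. SelectKBest is a stand-in for the paper's mix of NHST, backward selection and model-based preselection, and the synthetic data merely mirrors the cohort's dimensions.

```python
# Rough sketch of preselection followed by model building and AUC evaluation.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic cohort: 344 patients, 653 candidate predictors, few informative.
X, y = make_classification(n_samples=344, n_features=653, n_informative=12,
                           random_state=0)
model = make_pipeline(SelectKBest(f_classif, k=12),     # univariate preselection
                      LogisticRegression(max_iter=1000))
# AUC mirrors the predictive-power metric reported in the abstract.
print("AUC:", cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean())
```

Putting the selector inside the pipeline matters: it keeps the preselection inside each cross-validation fold, avoiding the leakage that fitting it on all data first would cause.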
6 Leveraging Google Earth Engine (GEE) and machine learning algorithms to incorporate in situ measurement from different times for rangelands monitoring (2020)
Mapping and monitoring of indicators of soil cover, vegetation structure, and various native and non-native species is a critical aspect of rangeland management. With the advancement in satellite imagery as well as cloud storage and computing, the capability now exists to conduct planetary-scale analysis, including mapping of rangeland indicators. Combined with recent investments in the collection of large amounts of in situ data in the western U.S., new approaches using machine learning can enable prediction of surface conditions at times and places when no in situ data are available. However, little analysis has yet been done on how the temporal relevancy of training data influences model performance. Here, we have leveraged the Google Earth Engine (GEE) platform and a machine learning algorithm (Random Forest, after comparison with other candidates) to identify the potential impact of different sampling times (across months and years) on estimation of rangeland indicators from the Bureau of Land Management's (BLM) Assessment, Inventory, and Monitoring (AIM) and Landscape Monitoring Framework (LMF) programs. Our results indicate that temporally relevant training data improves predictions, though the training data need not be from the exact same month and year for a prediction to be temporally relevant. Moreover, inclusion of training data from the time when predictions are desired leads to lower prediction error, but the addition of training data from other times does not contribute to overall model error. Using all of the available training data can lead to biases toward the mean for times when indicator values are especially high or low. However, for mapping purposes, limiting training data to just the time when predictions are desired can lead to poor predictions of values outside the spatial range of the training data for that period. We conclude that the best Random Forest prediction maps will use training data from all possible times with the understanding that estimates at the extremes will be biased.
Keywords: Google Earth Engine | Big data | Machine learning | Domain adaptation | Transfer learning | Feature selection | Rangeland monitoring
English article
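The paper's central question, whether temporally relevant training data improves Random Forest predictions, can be mimicked on synthetic data. In the sketch below the seasonal signal is deliberately left out of the predictors, so a model trained only on the target month implicitly captures that month's offset, while a model trained on all months is pulled toward the mean, echoing the bias the abstract describes. Everything here is illustrative, not the authors' setup.

```python
# Toy experiment: Random Forest error with temporally matched vs pooled
# training data, on synthetic "indicator" values with a seasonal component.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
n = 2000
month = rng.integers(1, 13, size=n)            # month each plot was sampled
X = rng.normal(size=(n, 10))                   # stand-in spectral predictors
y = X[:, 0] + np.sin(2 * np.pi * month / 12) + rng.normal(scale=0.3, size=n)

is_test = (np.arange(n) % 2 == 1) & (month == 7)   # predict July plots
for name, is_train in [("July-only training", (np.arange(n) % 2 == 0) & (month == 7)),
                       ("all-months training", np.arange(n) % 2 == 0)]:
    rf = RandomForestRegressor(n_estimators=200, random_state=0)
    rf.fit(X[is_train], y[is_train])
    mae = mean_absolute_error(y[is_test], rf.predict(X[is_test]))
    print(name, "MAE:", round(mae, 3))
```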
7 Oil palm mapping over Peninsular Malaysia using Google Earth Engine and machine learning algorithms (2020)
Oil palm plays a pivotal role in the ecosystem, environment and economy, and without proper monitoring, uncontrolled oil palm activities could contribute to deforestation with highly negative impacts on the environment; proper management and monitoring of the oil palm industry are therefore necessary. Mapping the distribution of oil palm is crucial in order to manage and plan the sustainable operations of oil palm plantations. Remote sensing provides a means to detect and map oil palm from space effectively. Recent advances in cloud computing and big data allow rapid mapping to be performed over a large geographical scale. In this study, 30 m Landsat 8 data were processed using the Google Earth Engine (GEE) cloud computing platform in order to classify oil palm land cover using non-parametric machine learning algorithms such as Support Vector Machine (SVM), Classification and Regression Tree (CART) and Random Forest (RF) for the first time over Peninsular Malaysia. The hyperparameters were tuned, and the overall accuracies produced by the SVM, CART and RF were 93.16%, 80.08% and 86.50% respectively. Overall, the SVM classified the 7 classes (water, built-up, bare soil, forest, oil palm, other vegetation and paddy) the best. However, RF extracted oil palm information better than the SVM. The algorithms were compared, and McNemar's test showed significant values for the comparisons between SVM and CART and between RF and CART. On the other hand, the performance of SVM and RF is considered equally effective. Despite the challenges in implementing machine learning optimisation using GEE over a large area, this paper shows the efficiency of GEE as a free cloud-based platform to perform bioresource distribution mapping, such as oil palm, over a large area in Peninsular Malaysia.
Keywords: cloud computing | image classification | Landsat | machine learning | oil palm
English article
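The classifier comparison in this entry rests on McNemar's test over paired per-pixel outcomes. A minimal sketch, with made-up counts, of how such a 2×2 agreement table is tested:

```python
# McNemar's test on a 2x2 agreement table built from two classifiers'
# per-pixel results against reference labels. Counts are illustrative only.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: SVM correct / SVM wrong; columns: CART correct / CART wrong.
table = np.array([[820, 95],    # both correct | only SVM correct
                  [40, 45]])    # only CART correct | both wrong
result = mcnemar(table, exact=False, correction=True)
print("McNemar statistic:", result.statistic, "p-value:", result.pvalue)
```

Only the off-diagonal cells (the disagreements) drive the statistic, which is why the test suits paired accuracy comparisons on the same validation pixels.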
8 Development of machine learning algorithms for prediction of mortality in spinal epidural abscess (2019)
BACKGROUND CONTEXT: In-hospital and short-term mortality in patients with spinal epidural abscess (SEA) remains unacceptably high despite diagnostic and therapeutic advancements. Forecasting this potentially avoidable consequence at the time of admission could improve patient management and counseling. Few studies exist to meet this need, and none have explored methodologies such as machine learning. PURPOSE: The purpose of this study was to develop machine learning algorithms for prediction of in-hospital and 90-day postdischarge mortality in SEA. STUDY DESIGN/SETTING: Retrospective, case-control study at two academic medical centers and three community hospitals from 1993 to 2016. PATIENT SAMPLE: Adult patients with an inpatient admission for radiologically confirmed diagnosis of SEA. OUTCOME MEASURES: In-hospital and 90-day postdischarge mortality. METHODS: Five machine learning algorithms (elastic-net penalized logistic regression, random forest, stochastic gradient boosting, neural network, and support vector machine) were developed and assessed by discrimination, calibration, overall performance, and decision curve analysis. RESULTS: Overall, 1,053 SEA patients were identified in the study, with 134 (12.7%) experiencing in-hospital or 90-day postdischarge mortality. The stochastic gradient boosting model achieved the best performance across discrimination, c-statistic=0.89, calibration, and decision curve analysis. The variables used for prediction of 90-day mortality, ranked by importance, were age, albumin, platelet count, neutrophil to lymphocyte ratio, hemodialysis, active malignancy, and diabetes. The final algorithm was incorporated into a web application available here: https://sorg-apps.shinyapps.io/seamortality/. CONCLUSIONS: Machine learning algorithms show promise on internal validation for prediction of 90-day mortality in SEA. Future studies are needed to externally validate these algorithms in independent populations.
Keywords: Artificial intelligence | Healthcare | Machine learning | Mortality | Spinal epidural abscess | Spine surgery
English article
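Since stochastic gradient boosting and the c-statistic (AUC) carry the result here, a small sketch may clarify both. The data are synthetic, shaped only loosely like the cohort (1,053 samples, ~12.7% positive); setting `subsample` below 1 is what makes the boosting "stochastic".

```python
# Hedged sketch: a stochastic gradient boosting classifier evaluated by
# discrimination (c-statistic / AUC), on synthetic data in place of the
# SEA cohort.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1053, n_features=7, weights=[0.873],
                           random_state=0)   # ~12.7% positive, as reported
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# subsample=0.8: each tree sees a random 80% of rows (the stochastic part).
gbm = GradientBoostingClassifier(subsample=0.8, random_state=0).fit(X_tr, y_tr)
print("c-statistic:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))
```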
9 Evidence-based clinical engineering: Machine learning algorithms for prediction of defibrillator performance (2019)
Poorly regulated and insufficiently supervised medical devices (MDs) carry a high risk of performance, accuracy and safety deviations affecting the clinical accuracy and efficiency of patient diagnosis and treatment. Even with the increase in technological sophistication of devices, incidents involving defibrillator malfunction are unfortunately not rare. To address this, we have developed an automated system based on machine learning algorithms that can predict the performance of defibrillators and possible performance failures of the device. To develop an automated system with high accuracy, an overall dataset containing safety and performance measurement data was acquired from periodic safety and performance inspections of 1,221 defibrillators. These inspections were carried out during 2015-2017 in private and public healthcare institutions in Bosnia and Herzegovina by an ISO 17020 accredited laboratory. Out of the overall number of samples, 974 were used during system development and 247 samples were used for subsequent validation of system performance. During system development, 5 different machine learning algorithms were used, and the resulting systems were compared by obtained performance. The results of this study demonstrate that clinical engineering and health technology management benefit from the application of machine learning in terms of cost optimization and medical device management. Automated systems based on machine learning algorithms can predict defibrillator performance with high accuracy. The system based on a Random Forest classifier with Genetic Algorithm feature selection yielded the highest accuracy among the machine learning systems. Adoption of such systems will help in overcoming the challenges of adapting maintenance and medical device supervision protocols to the rapid technological development of these devices; due to the increased complexity of the healthcare institution environment and the technological complexity of medical devices, performing maintenance strategies in the traditional manner causes many difficulties.
Keywords: Automated system | Machine learning | Medical device | Maintenance | Management | Prediction | Performance | Inspection | Evidence-based
English article
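The winning combination reported above, a Random Forest classifier with genetic-algorithm feature selection, can be sketched with a toy GA (bit-mask individuals, cross-validated accuracy as fitness). This is a generic stand-in on synthetic data, not the authors' implementation:

```python
# Toy genetic-algorithm feature selection wrapped around a Random Forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    # Fitness = cross-validated accuracy of an RF on the selected features.
    if not mask.any():
        return 0.0
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((12, 20)) < 0.5                    # random bit-mask population
for generation in range(10):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]          # keep the fittest half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6)], parents[rng.integers(6)]
        cut = rng.integers(1, 20)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        child ^= rng.random(20) < 0.05              # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", np.flatnonzero(best))
```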
10 TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set (2019)
We propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it for classification of healthy and cancer cell lines acquired by quantitative phase imaging. The proposed method, called transferring of pre-trained generative adversarial network (TOP-GAN), is hybridization between transfer learning and generative adversarial networks (GANs). Healthy cells and cancer cells of different metastatic potential have been imaged by low-coherence off-axis holography. After the acquisition, the optical path delay maps of the cells are extracted and directly used as inputs to the networks. In order to cope with the small number of classified images, we use GANs to train a large number of unclassified images from another cell type (sperm cells). After this preliminary training, we change the last layers of the network and design automatic classifiers for the correct cell type (healthy/primary cancer/metastatic cancer) with 90–99% accuracies, although small training sets of down to several images are used. These results are better in comparison to other classic methods that aim at coping with the same problem of a small training set. We believe that our approach makes the combination of holographic microscopy and deep learning networks more accessible to the medical field by enabling a rapid, automatic and accurate classification in stain-free imaging flow cytometry. Furthermore, our approach is expected to be applicable to many other medical image classification tasks, suffering from a small training set.
Keywords: Holography | Quantitative phase imaging | Deep learning | Machine learning algorithms | Image classification | Biological cells
English article
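The TOP-GAN recipe described above (pretrain a GAN discriminator on plentiful unlabeled images, in the paper's case sperm cells, then replace its last layers with a small classifier for the scarce labeled cells) reduces to a head swap plus freezing. A conceptual PyTorch sketch, with a made-up discriminator in place of the paper's network:

```python
# Conceptual TOP-GAN-style transfer: reuse a GAN discriminator's features,
# swap the real/fake head for a 3-class cell classifier, freeze the rest.
import torch
import torch.nn as nn

discriminator = nn.Sequential(             # assume this was GAN-pretrained
    nn.Conv2d(1, 32, 4, 2, 1), nn.LeakyReLU(0.2),  # 1-channel phase maps
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 1),                      # original real/fake output
)

for p in discriminator.parameters():       # freeze the pretrained features
    p.requires_grad = False

classifier = nn.Sequential(
    *list(discriminator.children())[:-1],  # keep features, drop real/fake head
    nn.Linear(64, 3),                      # healthy / primary / metastatic
)

# Only the new head's parameters remain trainable on the small labeled set:
trainable = [p for p in classifier.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))   # 64*3 + 3 = 195
```

Because only the tiny new head is trained, a handful of labeled images can suffice, which is the point of the approach.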