Download and view articles related to machine learning algorithms :: Page 1
Download the best ISI articles with Persian translation
Search results - machine learning algorithms

Number of articles found: 23
Row | Title | Type
1 Predicting Bitcoin price using machine learning: An approach to sample dimension engineering
Publication year: 2020 - English PDF pages: 13 - Persian DOC pages: 32
After the ups and downs of cryptocurrency prices in recent years, Bitcoin has increasingly been regarded as an investment asset. Because of the highly volatile nature of Bitcoin's price, sound predictions are needed so that investment decisions can be based on them. Although current research has used machine learning for more accurate Bitcoin price prediction, few studies have considered the possibility of applying different modeling techniques to samples with different data structures and dimensional features. To predict Bitcoin price at different frequencies using machine learning techniques, we first classify Bitcoin price by daily price and high-frequency price. A set of high-dimensional features, including property and network, trading and market, attention, and the gold spot price, is used to predict the daily Bitcoin price, while the basic trading features acquired from a cryptocurrency exchange are used to predict the price at 5-minute intervals. Statistical methods, including logistic regression and linear discriminant analysis, reach an accuracy of 66% for daily price prediction with high-dimensional features, outperforming more complicated machine learning algorithms. Compared with benchmark results for daily price prediction, we achieve better performance, with the highest accuracies of the statistical methods and the machine learning algorithms at 66% and 65.3%, respectively. Machine learning models, including random forest, XGBoost, quadratic discriminant analysis, support vector machine, and long short-term memory, outperform the statistical methods for 5-minute price prediction, with accuracy reaching 67.2%. Our investigation of Bitcoin price prediction can be considered a pilot study of the importance of sample dimension in machine learning techniques. (A short illustrative code sketch follows this entry.)
Keywords: Sample dimension engineering | Occam's Razor | Bitcoin price prediction | Machine learning algorithms
Translated article
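As a concrete illustration of the statistical baseline this abstract describes, the sketch below fits logistic regression and linear discriminant analysis to classify the daily price direction. It is a minimal sketch on random placeholder data; the paper's actual property/network, trading/market, attention, and gold-price features are not reproduced here.

```python
# Minimal sketch: daily up/down classification with the two statistical
# methods named in the abstract. Features and labels are random placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))           # placeholder high-dimensional features
y = (rng.random(1000) > 0.5).astype(int)  # 1 = price up next day, 0 = down

# shuffle=False mimics a chronological train/test split for time series.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: accuracy = {acc:.3f}")
```

On real features, the paper reports this simple baseline reaching 66% accuracy for daily prediction, ahead of more complex models.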
2 Identifying influential factors distinguishing recidivists among offender patients with a diagnosis of schizophrenia via machine learning algorithms
Publication year: 2020
Purpose: There is a lack of research on predictors of criminal recidivism of offender patients diagnosed with schizophrenia. Methods: 653 potential predictor variables were analyzed in a set of 344 offender patients with a diagnosis of schizophrenia (209 reconvicted) using machine learning algorithms. As a novel methodological approach, null hypothesis significance testing (NHST), backward selection, logistic regression, trees, support vector machines (SVM), and naive Bayes were used to preselect variables. Subsequently, the variables identified as most influential were used for machine learning algorithm building and evaluation. Results: The two final models (with/without imputation) predicted criminal recidivism with an accuracy of 81.7% and 70.6% and a predictive power (area under the curve, AUC) of 0.89 and 0.76 based on the following predictors: prescription of amisulpride prior to reoffending, suspended sentencing to imprisonment, legal complaints filed by relatives/therapists/public authorities, recent legal issues, number of offences leading to forensic treatment, anxiety upon discharge, being single, violence toward the care team and constant breaking of rules during treatment, illegal opioid use, Middle East as place of birth, and time span since the last psychiatric inpatient treatment. Conclusion: The results provide new insight into possible factors influencing persistent offending in a specific subgroup of patients with a schizophrenic spectrum disorder. (A short illustrative code sketch follows this entry.)
Keywords: Criminal justice | Criminal recidivism | Machine learning | Offender | Schizophrenia
English article
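A hedged sketch of the two-stage methodology described above: several selectors preselect influential variables, and only those feed the final classifier, evaluated by accuracy and AUC as in the study. The synthetic data and the particular pair of selectors (univariate F-test plus random-forest importances) are stand-ins for the paper's NHST, backward selection, logistic regression, trees, SVM, and naive Bayes preselection.

```python
# Two-stage sketch: preselect variables, then build and evaluate a final model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=344, n_features=100, n_informative=12,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

# Stage 1: keep variables flagged by either a univariate F-test or
# random-forest importances (stand-ins for the paper's selector ensemble).
univariate = set(SelectKBest(f_classif, k=20).fit(X_tr, y_tr)
                 .get_support(indices=True))
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
by_importance = set(np.argsort(rf.feature_importances_)[-20:])
keep = sorted(univariate | by_importance)

# Stage 2: fit the final model on the preselected variables only.
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
proba = clf.predict_proba(X_te[:, keep])[:, 1]
print("accuracy:", accuracy_score(y_te, clf.predict(X_te[:, keep])))
print("AUC:", roc_auc_score(y_te, proba))
```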
3 Leveraging Google Earth Engine (GEE) and machine learning algorithms to incorporate in situ measurement from different times for rangelands monitoring
Publication year: 2020
Mapping and monitoring of indicators of soil cover, vegetation structure, and various native and non-native species is a critical aspect of rangeland management. With the advancement in satellite imagery as well as cloud storage and computing, the capability now exists to conduct planetary-scale analysis, including mapping of rangeland indicators. Combined with recent investments in the collection of large amounts of in situ data in the western U.S., new approaches using machine learning can enable prediction of surface conditions at times and places when no in situ data are available. However, little analysis has yet been done on how the temporal relevancy of training data influences model performance. Here, we have leveraged the Google Earth Engine (GEE) platform and a machine learning algorithm (Random Forest, after comparison with other candidates) to identify the potential impact of different sampling times (across months and years) on estimation of rangeland indicators from the Bureau of Land Management's (BLM) Assessment, Inventory, and Monitoring (AIM) and Landscape Monitoring Framework (LMF) programs. Our results indicate that temporally relevant training data improves predictions, though the training data need not be from the exact same month and year for a prediction to be temporally relevant. Moreover, inclusion of training data from the time when predictions are desired leads to lower prediction error, but the addition of training data from other times does not contribute to overall model error. Using all of the available training data can lead to biases, toward the mean, for times when indicator values are especially high or low. However, for mapping purposes, limiting training data to just the time when predictions are desired can lead to poor predictions of values outside the spatial range of the training data for that period. We conclude that the best Random Forest prediction maps will use training data from all possible times with the understanding that estimates at the extremes will be biased. (A short illustrative code sketch follows this entry.)
Keywords: Google Earth Engine | Big data | Machine learning | Domain adaptation | Transfer learning | Feature selection | Rangeland monitoring
English article
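The paper's central question, whether temporally relevant training data improve Random Forest predictions, can be mimicked on synthetic data: train once on plots from the target period only and once on plots pooled across periods, then compare errors on held-out plots from the target period. The seasonal-drift simulation below is purely an assumption for illustration; the study itself runs Random Forest inside Google Earth Engine on Landsat features with AIM/LMF field plots.

```python
# Sketch: effect of temporal relevancy of training data on RF predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)

def make_season(n, shift):
    """Fake plots for one sampling period; `shift` mimics seasonal drift."""
    X = rng.normal(size=(n, 6))                               # e.g. band features
    y = X[:, 0] * 2 + shift + rng.normal(scale=0.5, size=n)   # indicator value
    return X, y

X_a, y_a = make_season(300, shift=0.0)    # plots from the target period
X_b, y_b = make_season(300, shift=1.5)    # plots from another period
X_te, y_te = make_season(200, shift=0.0)  # evaluation plots, target period

for label, (X, y) in {"target period only": (X_a, y_a),
                      "all periods pooled": (np.vstack([X_a, X_b]),
                                             np.concatenate([y_a, y_b]))}.items():
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    rmse = mean_squared_error(y_te, rf.predict(X_te)) ** 0.5
    print(f"{label}: RMSE = {rmse:.3f}")
```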
4 Oil palm mapping over Peninsular Malaysia using Google Earth Engine and machine learning algorithms
Publication year: 2020
Oil palm plays a pivotal role in the ecosystem, environment, and economy, and without proper monitoring, uncontrolled oil palm activities could contribute to deforestation with highly negative impacts on the environment; proper management and monitoring of the oil palm industry are therefore necessary. Mapping the distribution of oil palm is crucial in order to manage and plan the sustainable operations of oil palm plantations. Remote sensing provides a means to detect and map oil palm from space effectively. Recent advances in cloud computing and big data allow rapid mapping to be performed over a large geographical scale. In this study, 30 m Landsat 8 data were processed using the Google Earth Engine (GEE) cloud computing platform in order to classify oil palm land cover using non-parametric machine learning algorithms, namely Support Vector Machine (SVM), Classification and Regression Tree (CART) and Random Forest (RF), for the first time over Peninsular Malaysia. The hyperparameters were tuned, and the overall accuracies produced by the SVM, CART and RF were 93.16%, 80.08% and 86.50% respectively. Overall, the SVM classified the 7 classes (water, built-up, bare soil, forest, oil palm, other vegetation and paddy) best. However, RF extracted oil palm information better than the SVM. The algorithms were compared, and McNemar's test showed significant values for the comparisons between SVM and CART and between RF and CART. On the other hand, the performance of SVM and RF can be considered equally effective. Despite the challenges in implementing machine learning optimisation using GEE over a large area, this paper shows the efficiency of GEE as a free cloud-based platform to perform bioresource distribution mapping, such as oil palm, over a large area in Peninsular Malaysia. (A short illustrative code sketch follows this entry.)
Keywords: cloud computing | image classification | Landsat | machine learning | oil palm
English article
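A minimal sketch of the model comparison the abstract describes: fit an SVM and a Random Forest on the same training pixels, then compare their paired test-set predictions with McNemar's test. Synthetic features stand in for Landsat 8 bands, and the seven classes mirror the paper's legend in number only.

```python
# Sketch: SVM vs. RF classification compared with McNemar's test.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from statsmodels.stats.contingency_tables import mcnemar

X, y = make_classification(n_samples=2000, n_features=8, n_informative=6,
                           n_classes=7, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

svm_ok = SVC(kernel="rbf", C=10).fit(X_tr, y_tr).predict(X_te) == y_te
rf_ok = RandomForestClassifier(n_estimators=300,
                               random_state=0).fit(X_tr, y_tr).predict(X_te) == y_te

# 2x2 table of paired correct/incorrect outcomes for the two classifiers.
table = [[np.sum(svm_ok & rf_ok),  np.sum(svm_ok & ~rf_ok)],
         [np.sum(~svm_ok & rf_ok), np.sum(~svm_ok & ~rf_ok)]]
result = mcnemar(table, exact=False, correction=True)
print("accuracy: SVM", svm_ok.mean(), "RF", rf_ok.mean())
print("McNemar statistic:", result.statistic, "p-value:", result.pvalue)
```

McNemar's test is appropriate here because both classifiers are evaluated on the same test pixels, so their errors are paired rather than independent.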
5 Development of machine learning algorithms for prediction of mortality in spinal epidural abscess
Publication year: 2019
BACKGROUND CONTEXT: In-hospital and short-term mortality in patients with spinal epidural abscess (SEA) remains unacceptably high despite diagnostic and therapeutic advancements. Forecasting this potentially avoidable consequence at the time of admission could improve patient management and counseling. Few studies exist to meet this need, and none have explored methodologies such as machine learning. PURPOSE: The purpose of this study was to develop machine learning algorithms for prediction of in-hospital and 90-day postdischarge mortality in SEA. STUDY DESIGN/SETTING: Retrospective, case-control study at two academic medical centers and three community hospitals from 1993 to 2016. PATIENT SAMPLE: Adult patients with an inpatient admission for a radiologically confirmed diagnosis of SEA. OUTCOME MEASURES: In-hospital and 90-day postdischarge mortality. METHODS: Five machine learning algorithms (elastic-net penalized logistic regression, random forest, stochastic gradient boosting, neural network, and support vector machine) were developed and assessed by discrimination, calibration, overall performance, and decision curve analysis. RESULTS: Overall, 1,053 SEA patients were identified in the study, with 134 (12.7%) experiencing in-hospital or 90-day postdischarge mortality. The stochastic gradient boosting model achieved the best performance across discrimination (c-statistic = 0.89), calibration, and decision curve analysis. The variables used for prediction of 90-day mortality, ranked by importance, were age, albumin, platelet count, neutrophil to lymphocyte ratio, hemodialysis, active malignancy, and diabetes. The final algorithm was incorporated into a web application available here: https://sorg-apps.shinyapps.io/seamortality/. CONCLUSIONS: Machine learning algorithms show promise on internal validation for prediction of 90-day mortality in SEA. Future studies are needed to externally validate these algorithms in independent populations. (A short illustrative code sketch follows this entry.)
Keywords: Artificial intelligence | Healthcare | Machine learning | Mortality | Spinal epidural abscess | Spine surgery
English article
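The best model in this study was stochastic gradient boosting, judged by discrimination (the c-statistic, i.e. AUC) and calibration. Below is a minimal scikit-learn sketch on a synthetic cohort with roughly the study's 12.7% event rate; the seven predictor columns are placeholders, not the actual clinical variables.

```python
# Sketch: stochastic gradient boosting assessed by discrimination and calibration.
from sklearn.calibration import calibration_curve
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1053, n_features=7, weights=[0.87],
                           random_state=0)  # ~13% positive, as in the cohort
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0,
                                          stratify=y)

# subsample < 1.0 is what makes the boosting "stochastic".
gbm = GradientBoostingClassifier(n_estimators=300, subsample=0.7,
                                 random_state=0).fit(X_tr, y_tr)
proba = gbm.predict_proba(X_te)[:, 1]

print("c-statistic (AUC):", roc_auc_score(y_te, proba))
frac_pos, mean_pred = calibration_curve(y_te, proba, n_bins=5)
for fp, mp in zip(frac_pos, mean_pred):  # calibration: predicted vs. observed
    print(f"predicted {mp:.2f} -> observed {fp:.2f}")
```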
6 Evidence-based clinical engineering: Machine learning algorithms for prediction of defibrillator performance
Publication year: 2019
Poorly regulated and insufficiently supervised medical devices (MDs) carry a high risk of performance-accuracy and safety deviations affecting the clinical accuracy and efficiency of patient diagnosis and treatment. Even with the increase in technological sophistication of devices, incidents involving defibrillator malfunction are unfortunately not rare. To address this, we have developed an automated system based on machine learning algorithms that can predict the performance of defibrillators and possible failures that can affect that performance. To develop an automated system with high accuracy, an overall dataset containing safety and performance measurement data was acquired from periodic safety and performance inspections of 1,221 defibrillators. These inspections were carried out during 2015-2017 in private and public healthcare institutions in Bosnia and Herzegovina by an ISO 17020 accredited laboratory. Of the overall number of samples, 974 were used during system development and 247 were used for subsequent validation of system performance. During system development, 5 different machine learning algorithms were used, and the resulting systems were compared by the performance obtained. The results of this study demonstrate that clinical engineering and health technology management benefit from the application of machine learning in terms of cost optimization and medical device management. Automated systems based on machine learning algorithms can predict defibrillator performance with high accuracy. Systems based on a Random Forest classifier with Genetic Algorithm feature selection yielded the highest accuracy among the machine learning systems. Adoption of such systems will help in overcoming the challenges of adapting maintenance and medical device supervision protocols to the rapid technological development of these devices. Due to the increased complexity of the healthcare institution environment and the increased technological complexity of medical devices, performing maintenance strategies in the traditional manner causes considerable difficulties. (A short illustrative code sketch follows this entry.)
Keywords: Automated system | Machine learning | Medical device | Maintenance | Management | Prediction | Performance | Inspection | Evidence-based
English article
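The winning configuration above, a Random Forest with genetic-algorithm feature selection, can be sketched with a deliberately tiny GA: binary masks over the features evolve by selection, one-point crossover, and bit-flip mutation, with cross-validated RF accuracy as fitness. Population size, generation count, mutation rate, and the synthetic data are all illustrative assumptions.

```python
# Sketch: genetic-algorithm feature selection wrapped around a Random Forest.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=974, n_features=30, n_informative=8,
                           random_state=0)

def fitness(mask):
    """Cross-validated accuracy of an RF restricted to the masked features."""
    if not mask.any():
        return 0.0
    rf = RandomForestClassifier(n_estimators=60, random_state=0)
    return cross_val_score(rf, X[:, mask], y, cv=3).mean()

pop = rng.random((10, X.shape[1])) < 0.5          # random initial feature subsets
for _ in range(8):                                # a few generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-5:]]        # keep the fittest half
    children = []
    for _ in range(5):
        a, b = parents[rng.integers(5, size=2)]
        cut = rng.integers(1, X.shape[1])         # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < 0.03    # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = max(pop, key=fitness)
print("selected features:", np.flatnonzero(best))
print("cross-validated accuracy:", round(fitness(best), 3))
```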
7 TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set
Publication year: 2019
We propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it to the classification of healthy and cancer cell lines acquired by quantitative phase imaging. The proposed method, called transferring of a pre-trained generative adversarial network (TOP-GAN), is a hybridization between transfer learning and generative adversarial networks (GANs). Healthy cells and cancer cells of different metastatic potential have been imaged by low-coherence off-axis holography. After the acquisition, the optical path delay maps of the cells are extracted and directly used as inputs to the networks. In order to cope with the small number of classified images, we use GANs to train on a large number of unclassified images from another cell type (sperm cells). After this preliminary training, we change the last layers of the network and design automatic classifiers for the correct cell type (healthy/primary cancer/metastatic cancer) with 90-99% accuracies, although small training sets of down to several images are used. These results are better in comparison to other classic methods that aim at coping with the same problem of a small training set. We believe that our approach makes the combination of holographic microscopy and deep learning networks more accessible to the medical field by enabling rapid, automatic and accurate classification in stain-free imaging flow cytometry. Furthermore, our approach is expected to be applicable to many other medical image classification tasks suffering from a small training set. (A short illustrative code sketch follows this entry.)
Keywords: Holography | Quantitative phase imaging | Deep learning | Machine learning algorithms | Image classification | Biological cells
English article
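The transfer idea in TOP-GAN: pre-train a GAN on plentiful unlabeled images of another cell type, keep the discriminator's convolutional features, replace its last layers, and fine-tune a small classifier head on the few labeled images. Below is a minimal Keras sketch under those assumptions; the GAN pre-training itself is omitted, and the architecture, input shape, and training-set size are invented, not the paper's exact network.

```python
# Sketch: reuse a (notionally pre-trained) GAN discriminator as a frozen
# feature extractor and fine-tune a new 3-class head on a tiny labeled set.
import numpy as np
from tensorflow.keras import layers, models

# Stand-in for a discriminator already trained adversarially on unlabeled
# sperm-cell phase images.
discriminator = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, strides=2, activation="relu"),
    layers.Conv2D(64, 3, strides=2, activation="relu"),
    layers.Flatten(),
    layers.Dense(1),  # real/fake head, to be discarded
])

base = models.Model(discriminator.input, discriminator.layers[-2].output)
base.trainable = False  # keep the pre-trained feature extractor fixed

classifier = models.Sequential([
    base,
    layers.Dense(64, activation="relu"),
    layers.Dense(3, activation="softmax"),  # healthy / primary / metastatic
])
classifier.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])

# Tiny labeled set of optical path delay maps (synthetic placeholders).
X_small = np.random.rand(30, 64, 64, 1).astype("float32")
y_small = np.random.randint(0, 3, size=30)
classifier.fit(X_small, y_small, epochs=5, batch_size=8, verbose=0)
```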
8 Solving the motion planning problem using learning experience through case-based reasoning and machine learning algorithms
Publication year: 2019
This article presents two novel methodologies for solving the motion planning problem through retained experience. Both approaches employ AI's case-based reasoning (CBR) technique. Case-based reasoning is an expert system development methodology which reuses past solutions to solve new problems. The first approach uses CBR to retain the K most similar cases and solves the motion planning problem by merging those solutions into a set; it then picks from this set based on a heuristic function to assemble a final solution. The second approach employs the retained K similar cases differently: it uses those solutions to build a graph which can be queried using traditional graph search algorithms. Results demonstrate the success of both approaches in terms of solution quality and success rate compared to other experience-based algorithms. Such utilization of CBR systems opens new research directions for building systems that can solve NP problems based exclusively on retained experience. (A short illustrative code sketch follows this entry.)
Keywords: Sampling-based algorithms | Experience-based algorithms | Case-based reasoning | Artificial intelligence | Motion planning
English article
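The article's second approach, retrieving the K most similar past cases and fusing their stored paths into a searchable graph, can be sketched in a few lines. The toy case base, 2-D workspace, and start/goal similarity metric below are illustrative assumptions; the paper's case representation is richer.

```python
# Sketch: CBR for motion planning, graph variant. Retrieve K similar cases,
# merge their stored paths into one graph, and answer the query with search.
import math
import networkx as nx

# Each past case: (start, goal, solution path as a list of 2-D waypoints).
case_base = [
    ((0, 0), (9, 9), [(0, 0), (2, 3), (5, 5), (7, 8), (9, 9)]),
    ((1, 0), (9, 8), [(1, 0), (3, 2), (6, 4), (8, 7), (9, 8)]),
    ((0, 2), (8, 9), [(0, 2), (2, 5), (4, 7), (8, 9)]),
]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def retrieve(start, goal, k=2):
    """The K cases whose (start, goal) pair is closest to the new query."""
    return sorted(case_base,
                  key=lambda c: dist(c[0], start) + dist(c[1], goal))[:k]

def plan(start, goal, k=2):
    g = nx.Graph()
    for _, _, path in retrieve(start, goal, k):
        nx.add_path(g, path, weight=1)  # reuse stored waypoint edges
    # Connect the new query endpoints to their nearest stored waypoints.
    for endpoint in (start, goal):
        nearest = min(g.nodes, key=lambda n: dist(n, endpoint))
        g.add_edge(endpoint, nearest, weight=dist(endpoint, nearest))
    return nx.shortest_path(g, start, goal, weight="weight")

print(plan((0, 1), (9, 9)))
```

A real planner would also verify the reused edges against the current obstacle map before returning the path; that repair step is omitted here.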
9 A review of machine learning algorithms for identification and classification of non-functional requirements
Publication year: 2019
Context: Recent developments in requirements engineering (RE) methods have seen a surge in using machine-learning (ML) algorithms to solve some difficult RE problems. One such problem is the identification and classification of non-functional requirements (NFRs) in requirements documents. ML-based approaches to this problem have been shown to produce promising results, better than those produced by traditional natural language processing (NLP) approaches. Yet, a systematic understanding of these ML approaches is still lacking. Method: This article reports on a systematic review of 24 ML-based approaches for identifying and classifying NFRs. Directed by three research questions, this article aims to understand what ML algorithms are used in these approaches, how these algorithms work and how they are evaluated. Results: (1) 16 different ML algorithms are found in these approaches, of which supervised learning algorithms are the most popular. (2) All 24 approaches have followed a standard process in identifying and classifying NFRs. (3) Precision and recall are the most used metrics to measure the performance of these approaches. Finding: The review finds that while ML-based approaches have potential in the classification and identification of NFRs, they face some open challenges that will affect their performance and practical application. Impact: The review calls for close collaboration between RE and ML researchers to address the open challenges facing the development of real-world ML systems. Significance: The use of ML in RE opens up exciting opportunities to develop novel expert and intelligent systems to support RE tasks and processes. This implies that RE is being transformed into an application of modern expert systems. (A short illustrative code sketch follows this entry.)
Keywords: Requirements engineering | Non-functional requirements | Requirements documents | Requirements identification | Requirements classification | Machine learning
English article
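A minimal sketch of the supervised pipeline most of the 24 reviewed approaches share: vectorize requirement sentences, train a classifier, and report precision and recall, which the review found to be the most-used metrics. The tiny example corpus and the choice of TF-IDF with naive Bayes are illustrative assumptions, not a specific reviewed approach.

```python
# Sketch: NFR vs. FR classification with TF-IDF features and naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import precision_score, recall_score
from sklearn.naive_bayes import MultinomialNB

train_reqs = [
    "The system shall respond to any query within 2 seconds.",   # NFR
    "All stored passwords shall be encrypted with AES-256.",     # NFR
    "The user shall be able to export reports as PDF.",          # FR
    "The administrator shall be able to create new accounts.",   # FR
]
train_labels = [1, 1, 0, 0]  # 1 = non-functional, 0 = functional

vec = TfidfVectorizer()
clf = MultinomialNB().fit(vec.fit_transform(train_reqs), train_labels)

test_reqs = ["The system shall encrypt all network traffic.",
             "The user shall be able to delete an account."]
test_labels = [1, 0]
pred = clf.predict(vec.transform(test_reqs))
print("precision:", precision_score(test_labels, pred, zero_division=0))
print("recall:", recall_score(test_labels, pred))
```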