Download and view articles related to "accuracy" :: Page 1
Download the best ISI articles with Persian translation


Search results - accuracy

Number of articles found: 275
No. | Title | Type
1 iRestroom : A smart restroom cyberinfrastructure for elderly people
iRestroom: a smart restroom cyberinfrastructure for elderly people - 2022
According to a report by the UN and WHO, by 2030 the number of senior people (aged over 65) is projected to grow to 1.4 billion, nearly 16.5% of the global population. Seniors who live alone must have their health state closely monitored to avoid unexpected events (such as a fall). This study explains the underlying principles, methodology, and research that went into developing the concept, as well as the need for and scope of a restroom cyberinfrastructure system, which we call iRestroom, to assess the frailty of elderly people so that they can live a comfortable, independent, and secure life at home. The proposed restroom idea is based on the required situations, which are determined by user studies, socio-cultural and technological trends, and user requirements. The iRestroom is designed as a multi-sensory place with interconnected devices where caregivers of older persons can access interactive material and services throughout their everyday activities. The prototype is then tested at Texas A&M University-Kingsville. A Naïve Bayes classifier is utilized to anticipate the locations of the sensors, which serves to provide a constantly updated reference for the data originating from numerous sensors and devices installed in different locations throughout the restroom. A small sample of pilot data was obtained, as well as pertinent web data. The Institutional Review Board (IRB) has approved all the methods.
keywords: IoT | Sensors | Elder Care | Smart Systems | Machine Learning
English article
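The abstract above mentions a Naïve Bayes classifier used to anticipate sensor locations. As a rough illustration only (the paper's actual features are not given, so the sensor readings and location labels below are invented), a categorical Naïve Bayes classifier can be sketched in a few lines:

```python
import math
from collections import Counter, defaultdict

def train_nb(samples, labels):
    """Count-based training for categorical Naive Bayes."""
    label_counts = Counter(labels)
    # feature_counts[label][i][value] = how often feature i took `value` under `label`
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for x, y in zip(samples, labels):
        for i, v in enumerate(x):
            feature_counts[y][i][v] += 1
    return label_counts, feature_counts

def predict_nb(model, x, alpha=1.0):
    """Most probable label under log P(label) + sum of log P(feature|label), Laplace-smoothed."""
    label_counts, feature_counts = model
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for y, n in label_counts.items():
        score = math.log(n / total)
        for i, v in enumerate(x):
            counts = feature_counts[y][i]
            score += math.log((counts[v] + alpha) / (n + alpha * (len(counts) + 1)))
        if score > best_score:
            best_label, best_score = y, score
    return best_label
```

Training on hypothetical readings such as `("wet", "low")` labeled with locations like `"sink"` then lets `predict_nb` assign a location to a new reading.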
2 High-accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms
High accuracy in the classification of butchery cut marks and crocodile tooth marks using machine learning methods and computer vision algorithms - 2022
Some researchers using traditional taphonomic criteria (groove shape and presence/absence of microstriations) have cast doubt on the potential equifinality presented by crocodile tooth marks and stone tool butchery cut marks. Other researchers have argued that multivariate methods can efficiently separate both types of marks. Differentiating both taphonomic agents is crucial for determining the earliest evidence of carcass processing by hominins. Here, we use an updated machine learning approach (discarding the artificial bootstrapping of the original imbalanced samples) to show that microscopic features shaped as categorical variables, corresponding to intrinsic properties of mark structure, can accurately discriminate both types of bone modifications. We also implement new deep-learning methods that objectively achieve the highest accuracy in differentiating cut marks from crocodile tooth scores (99% on testing sets). The present study shows that there are precise ways of differentiating both taphonomic agents, and this invites taphonomists to apply them to controversial paleontological and archaeological specimens.
keywords: Taphonomy | Cut marks | Tooth marks | Machine learning | Deep learning | Convolutional neural networks | Butchery
English article
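The abstract notes that the authors discarded artificial bootstrapping of the imbalanced samples. One reason imbalance matters: plain accuracy can look high while the minority class is misclassified. Balanced accuracy (the mean of per-class recalls) makes this visible; a minimal sketch with invented labels:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; unlike plain accuracy, not inflated by a dominant class."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return sum(correct[c] / total[c] for c in total) / len(total)
```

For example, with 8 cut marks and 2 tooth marks where one tooth mark is misread as a cut mark, plain accuracy is 0.9 but balanced accuracy is only 0.75.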
3 Image2Triplets: A computer vision-based explicit relationship extraction framework for updating construction activity knowledge graphs
Image2Triplets: a computer vision-based explicit relationship extraction framework for updating construction activity knowledge graphs - 2022
Knowledge graph (KG) is an effective tool for knowledge management, particularly in the architecture, engineering and construction (AEC) industry, where knowledge is fragmented and complicated. However, research on KG updates in the industry is scarce, with most current research focusing on text-based KG updates. Considering the superiority of visual data over textual data in terms of accuracy and timeliness, the potential of computer vision technology for explicit relationship extraction in KG updates is yet to be explored. This paper combines zero-shot human-object interaction detection techniques with general KGs to propose a novel framework called Image2Triplets that can extract explicit visual relationships from images to update the construction activity KG. Comprehensive experiments on images of architectural decoration processes have been performed to validate the proposed framework. The results and insights will contribute new knowledge and evidence to human-object interaction detection, KG updates and construction informatics from a theoretical perspective.
keywords: Zero-shot learning | Human-object interaction detection | Computer vision | Explicit relationship extraction | Knowledge graph
English article
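The framework's output is (subject, predicate, object) triplets used to update a construction-activity KG. As a minimal sketch of that update step, with a dict-of-edge-sets KG and invented triplets (the abstract does not describe the paper's actual data model):

```python
def update_kg(kg, triplets):
    """Merge (subject, predicate, object) triplets into a dict-of-edge-sets KG.

    Returns how many new edges were added; duplicate triplets are skipped.
    """
    added = 0
    for s, p, o in triplets:
        edges = kg.setdefault(s, set())
        if (p, o) not in edges:
            edges.add((p, o))
            added += 1
    return added
```

Feeding the same triplet twice leaves the graph unchanged, so repeated extraction runs over overlapping image sets stay idempotent.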
4 A novel method of fish tail fin removal for mass estimation using computer vision
A novel method of fish tail fin removal for mass estimation using computer vision - 2022
Fish mass estimation is extremely important for farmers to obtain fish biomass information, which can be used to optimize daily feeding, control stocking densities and ultimately determine the optimal harvest time. However, the fish tail fin does not contribute much to total body mass. Additionally, the tail fin of free-swimming fish is deformed or bent most of the time, resulting in feature measurement errors that further reduce the mass prediction accuracy of computer vision. To solve this problem, a novel unsupervised method for fish tail fin removal was proposed to develop mass prediction models based on ventral geometrical features without the tail fin. Firstly, the fish tail fin was fully automatically removed using the Cartesian coordinate system and image processing. Secondly, different features were extracted from fish images with and without the tail fin. Finally, the relationship between fish mass and the features was estimated by Partial Least Squares (PLS) regression. In this paper, tail fins were removed fully automatically, and the mass estimation model based on area and squared area performed best on the test dataset, with a high coefficient of determination (R2) of 0.991, a root mean square error (RMSE) of 7.10 g, a mean absolute error (MAE) of 5.36 g and a maximum relative error (MaxRE) of 8.46%. These findings indicate that a mass prediction model without the fish tail fin can estimate fish mass more accurately than a model with the tail fin, and it might be extended to estimate the biomass of free-swimming fish underwater in aquaculture.
keywords: Tail fin removal | Automation | Fish | Mass estimation | Computer vision
English article
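The best model reported above regresses mass on area and squared area. The paper uses PLS; as a simplified stand-in with synthetic numbers, an ordinary least-squares fit of mass ≈ a·A + b·A² reduces to 2×2 normal equations solvable by Cramer's rule:

```python
def fit_area_mass(areas, masses):
    """Least-squares fit of mass ~ a*A + b*A^2 via the 2x2 normal equations."""
    s11 = sum(a * a for a in areas)   # sum of A^2
    s12 = sum(a ** 3 for a in areas)  # sum of A^3
    s22 = sum(a ** 4 for a in areas)  # sum of A^4
    t1 = sum(a * m for a, m in zip(areas, masses))
    t2 = sum(a * a * m for a, m in zip(areas, masses))
    det = s11 * s22 - s12 * s12
    return (t1 * s22 - t2 * s12) / det, (s11 * t2 - s12 * t1) / det

def r_squared(y, y_hat):
    """Coefficient of determination of predictions y_hat against observations y."""
    mean = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - mean) ** 2 for a in y)
    return 1 - ss_res / ss_tot
```

With data generated exactly by such a model, the fit recovers the coefficients and R² approaches 1, mirroring how the paper scores its PLS model.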
5 ChickenNet - an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision
ChickenNet: an end-to-end approach for plumage condition assessment of laying hens in commercial farms using computer vision - 2022
Regular plumage condition assessment in laying hens is essential to monitor the hens' welfare status and to detect the occurrence of feather-pecking activities. However, in commercial farms this is a labor-intensive, manual task. This study proposes a novel approach for automated plumage condition assessment using computer vision and deep learning. It presents ChickenNet, an end-to-end convolutional neural network that detects hens and simultaneously predicts a plumage condition score for each detected hen. To investigate the effect of input image characteristics, the method was evaluated using images with and without depth information in resolutions of 384 × 384, 512 × 512, 896 × 896 and 1216 × 1216 pixels. Further, to determine the impact of subjective human annotations, plumage condition predictions were compared to manual assessments of one observer and to matching annotations of two observers. Among all tested settings, performance metrics based on matching manual annotations of two observers were equal to or better than those based on annotations of a single observer. The best result obtained among all tested configurations was a mean average precision (mAP) of 98.02% for hen detection, while 91.83% of the plumage condition scores were predicted correctly. Moreover, it was revealed that the performance of hen detection and plumage condition assessment of ChickenNet was not generally enhanced by depth information. Increasing image resolution improved plumage assessment up to a resolution of 896 × 896 pixels, while high detection accuracies (mAP > 0.96) could already be achieved at lower resolutions. The results indicate that ChickenNet provides a sufficient basis for automated monitoring of plumage conditions in commercial laying hen farms.
keywords: Poultry | Plumage assessment | Computer vision | Deep learning | Instance segmentation
English article
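The detection results above are scored with mean average precision (mAP), which rests on matching predicted and ground-truth boxes by intersection over union (IoU). The IoU computation itself is compact:

```python
def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # zero when the boxes are disjoint
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A prediction is typically counted as a true positive when its IoU against an unmatched ground-truth box exceeds a threshold such as 0.5; mAP then averages precision over recall levels and classes.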
6 Semantic Riverscapes: Perception and evaluation of linear landscapes from oblique imagery using computer vision
Semantic Riverscapes: perception and evaluation of linear landscapes from oblique imagery using computer vision - 2022
Traditional approaches for visual perception and evaluation of river landscapes adopt on-site surveys or assessments through photographs. The former is expensive, hindering large-scale analyses, and is conducted only on street-level or top-down imagery. The latter only reflects subjective perception and also entails a laborious process. Addressing these challenges, this study proposes an alternative: a novel workflow for visual analysis of urban river landscapes that combines unmanned aerial vehicle (UAV) oblique photography with computer vision (CV) and virtual reality (VR). The approach is demonstrated with an experiment on a section of the Grand Canal in China, where UAV oblique panoramic imagery has been processed using semantic segmentation for visual evaluation with an index system we designed. Concurrently, surveys in immersive and non-immersive VR are used to evaluate these photos, with a total of 111 participants expressing their perceptions across multiple dimensions. Then, the relationship between people's subjective visual perception and the river landscape environment as seen by computers is established. The results suggest that using this approach, rivers and surrounding landscapes can be analyzed automatically and efficiently; the mean pixel accuracy (MPA) of the developed model is 90%, which advances the state of the art. The results of this study can benefit urban planners in formulating riverside development policies and in analyzing the perception of plans for a future scenario before an area is redeveloped, and the method can also aid relevant parties in gaining a macro understanding of the overall situation of the river as a basis for follow-up research. Due to its simplicity, accuracy and effectiveness, this workflow is transferable and cost-effective for large-scale investigations of riverscapes and linear heritage. We openly release Semantic Riverscapes, the dataset we collected and processed, bridging another gap in the field.
keywords: Riverside | Open data | GeoAI | Aerial surveys | Drones | Virtual reality
English article
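The workflow above derives landscape indices from semantic segmentation output. The abstract does not give the index formulas; as a hedged sketch, one common building block is the per-class pixel share of a segmented image (the class labels below are invented):

```python
from collections import Counter

def class_ratios(label_map):
    """Per-class pixel share of a segmentation label map given as nested rows of class labels."""
    flat = [lbl for row in label_map for lbl in row]
    counts = Counter(flat)
    n = len(flat)
    return {c: counts[c] / n for c in counts}
```

Ratios like the water or vegetation share of each panorama can then be correlated with the participants' subjective ratings.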
7 Assessing surface drainage conditions at the street and neighborhood scale: A computer vision and flow direction method applied to lidar data
Assessing surface drainage conditions at the street and neighborhood scale: a computer vision and flow direction method applied to lidar data - 2022
Surface drainage at the neighborhood and street scales plays an important role in conveying stormwater and mitigating urban flooding. Surface drainage at the local scale is often ignored due to the lack of up-to-date fine-scale topographical information. This paper addresses this issue by providing a novel method for evaluating surface drainage at the neighborhood and street scales based on mobile lidar (light detection and ranging) measurements. The developed method derives topographical properties and runoff accumulation by applying a semantic segmentation (SS) model (a computer vision technique) and a flow direction model (a hydrology technique) to lidar data. Fifty lidar images representing 50 street blocks were used to train, validate, and test the SS model. On the test dataset, the SS model achieves 80.3% IoU and 88.5% accuracy. The results suggest that the proposed method can effectively evaluate surface drainage conditions at both the neighborhood and street scales and identify problematic low points that could be susceptible to water ponding. Municipalities and property owners can use this information to take targeted corrective maintenance actions.
keywords: Semantic segmentation | Flow direction | Mobile lidar | Surface drainage | Drainage infrastructure
English article
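The hydrology half of the method is a flow direction model over the lidar-derived surface. A minimal D8-style sketch on a toy elevation grid (the paper's exact variant is not specified): each cell drains to its steepest-descent neighbor, and cells with no downhill neighbor are candidate ponding points.

```python
def d8_flow_direction(dem):
    """For each cell of a 2-D elevation grid, return the (dr, dc) offset of the
    steepest-descent neighbor, or None for a local low point (potential ponding)."""
    rows, cols = len(dem), len(dem[0])
    neighbors = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    result = []
    for r in range(rows):
        row = []
        for c in range(cols):
            best, steepest = None, 0.0
            for dr, dc in neighbors:
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    drop = dem[r][c] - dem[nr][nc]
                    if dr and dc:          # diagonal neighbors are sqrt(2) cells away
                        drop /= 2 ** 0.5
                    if drop > steepest:
                        best, steepest = (dr, dc), drop
            row.append(best)
        result.append(row)
    return result
```

Accumulating flow along these directions yields the runoff accumulation the paper derives, and `None` cells mark the low points flagged for maintenance.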
8 Power to the people: Applying citizen science and computer vision to home mapping for rural energy access
Power to the people: applying citizen science and computer vision to home mapping for rural energy access - 2022
To implement effective rural electricity access systems, it is fundamental to identify where potential consumers live. Here, we test the suitability of citizen science paired with satellite imagery and computer vision to map remote off-grid homes for electrical system design. A citizen science project called "Power to the People" was completed on the Zooniverse platform to collect home annotations in Uganda, Kenya, and Sierra Leone. Thousands of citizen scientists created a novel dataset of 578,010 home annotations with an average mapping speed of 7 km2/day. These data were post-processed with clustering to determine high-consensus home annotations. The raw annotations achieved a recall of 93% and a precision of 49%; clustering the annotations increased precision to 69%. These were used to train a Faster R-CNN object detection model, producing detections useful as a first pass for home-level mapping with a feasible mapping rate of 42,938 km2/day. Detections achieved a precision of 67% and a recall of 36%. This research shows citizen science and computer vision to be a promising pipeline for accelerated rural home-level mapping to enable energy system design.
keywords: Citizen science | Computer vision | Electricity access | Rural mapping | Satellite imagery | Remote sensing
English article
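The abstract describes clustering raw annotations to keep only high-consensus homes, which raised precision from 49% to 69%. A simple greedy radius-clustering sketch of that post-processing step (the radius and vote threshold are invented parameters, not the paper's):

```python
def consensus_points(points, radius, min_votes):
    """Greedily cluster 2-D annotations within `radius` of a seed point and
    return centroids of clusters backed by at least `min_votes` annotations."""
    remaining = list(points)
    centroids = []
    while remaining:
        seed = remaining.pop(0)
        cluster, rest = [seed], []
        for p in remaining:
            if (p[0] - seed[0]) ** 2 + (p[1] - seed[1]) ** 2 <= radius ** 2:
                cluster.append(p)
            else:
                rest.append(p)
        remaining = rest
        if len(cluster) >= min_votes:
            centroids.append((sum(p[0] for p in cluster) / len(cluster),
                              sum(p[1] for p in cluster) / len(cluster)))
    return centroids
```

Requiring several independent volunteers to mark roughly the same spot filters out stray clicks, trading some recall for the precision gain reported above.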
9 Disintegration testing augmented by computer vision technology
Disintegration testing augmented by computer vision technology - 2022
Oral solid dosage forms, specifically immediate-release tablets, are prevalent in the pharmaceutical industry. Disintegration testing is often the first step of commercialization and large-scale production of these dosage forms. Current disintegration testing in the pharmaceutical industry, according to United States Pharmacopeia (USP) chapter 〈701〉, only gives information about the duration of the tablet disintegration process. This information is subjective, variable, and prone to human error due to manual or physical data collection methods via the human eye or contact disks. To lessen the data integrity risk associated with this process, efforts have been made to automate the analysis of the disintegration process using digital lenses and other imaging technologies. This provides a non-invasive method to quantitatively determine disintegration time through computer algorithms. The main challenge in developing such a system is visualizing tablet pieces through cloudy and turbid liquid. The Computer Vision for Disintegration (CVD) system has been developed to be used along with traditional pharmaceutical disintegration testing devices to monitor tablet pieces and distinguish them from the surrounding liquid. The software written for CVD utilizes data captured by cameras or other lenses, then uses mobile SSD and CNN models, with OpenCV and an FRCNN machine learning model, to analyze and interpret the data. This technology is capable of consistently identifying tablets with ≥ 99.6% accuracy. Not only is the data produced by CVD more reliable, but it opens the possibility of a deeper understanding of disintegration rates and mechanisms in addition to duration.
keywords: Disintegration | Oral solid dosage forms | Disintegration test | Machine learning | Neural networks
English article
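Once a system like CVD segments tablet pixels per frame, the disintegration endpoint can be read off the area-versus-time curve. A hedged sketch of that final step (the threshold fraction is an invented parameter, not from the paper):

```python
def disintegration_time(areas, fps, threshold_frac=0.05):
    """Return the first time (in seconds) at which the detected tablet pixel area
    falls below threshold_frac of its initial value, or None if it never does."""
    initial = areas[0]
    for frame, area in enumerate(areas):
        if area < threshold_frac * initial:
            return frame / fps
    return None
```

Because the endpoint comes from a fixed rule over measured areas rather than an analyst's eye, repeated runs of the same video give the same time, which is the data-integrity benefit the abstract describes.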
10 Computer vision-based illumination-robust and multi-point simultaneous structural displacement measuring method
Computer vision-based illumination-robust and multi-point simultaneous structural displacement measuring method - 2022
Computer vision-based techniques for structural displacement measurement are rapidly becoming popular in civil structural engineering. However, most existing computer vision-based displacement measurement methods require man-made targets for object matching or tracking, and the measurement accuracy is usually highly sensitive to ambient illumination variations. A computer vision-based, illumination-robust, multi-point simultaneous measuring method is proposed for structural displacement measurement. The method consists of two parts: one segments the beam body from its background, with the segmentation carried out by a fully convolutional network (FCN) and a conditional random field (CRF); the other is digital image correlation (DIC)-based displacement measurement. A simply supported beam was built in the laboratory, and the accuracy and illumination robustness were verified through three groups of elaborately designed experiments. Due to the exploitation of FCN and CRF for pixel-wise segmentation, a number of locations along the segmented beam body can be chosen and measured simultaneously. It is verified that the method is illumination-robust, since the displacement measurements show the smallest fluctuations under illumination variations. The proposed method does not require any man-made targets attached to the structure, but because of the use of DIC for displacement measurement, the regions centered on the measuring points need to have texture features.
keywords: Structural health monitoring | Displacement measurement | Computer vision | Deep learning | Object segmentation | Digital image correlation
English article
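The displacement half of the method is digital image correlation: find the shift that maximizes the normalized cross-correlation between a reference patch and the current image. A one-dimensional sketch of that matching step (toy signals, integer shifts only):

```python
def zncc(a, b):
    """Zero-normalized cross-correlation of two equal-length patches; 0.0 for flat patches."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return 0.0 if da == 0 or db == 0 else num / (da * db)

def best_shift(signal, template):
    """Integer shift of `template` along `signal` with the highest ZNCC score."""
    best_score, best = -2.0, 0
    for s in range(len(signal) - len(template) + 1):
        score = zncc(signal[s:s + len(template)], template)
        if score > best_score:
            best_score, best = score, s
    return best
```

Real DIC refines this to sub-pixel shifts over 2-D patches; the zero-variance guard also hints at why the measuring regions need texture features.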