A dynamic classification unit for online segmentation of big data via small data buffers
A dynamic classification unit for online segmentation of big data via small data buffers - 2020
In many segmentation processes, we assign new cases according to a model that was built on the basis of past cases. As long as the new cases are “similar enough” to the past cases, segmentation proceeds normally. However, when a new case is substantially different from the known cases, a reexamination of the previously created segments is required. The reexamination may result in the creation of new segments or in the updating of the existing ones. In this paper, we assume that in big and dynamic data environments it is not possible to reexamine all past data and, therefore, we suggest using small groups of selected cases, stored in small data buffers, as an alternative to the collection of all past data. We present an incremental dynamic classifier that supports real-time unsupervised segmentation in big and dynamic data environments. In order to reduce the computational effort of unsupervised clustering in such environments, the suggested model performs calculations only on the relevant data buffers that store the relevant representative cases. In addition, the suggested model can serve as a dynamic classification unit (DCU) that can act as an autonomous agent, as well as collaborate with other DCUs. The evaluation is presented by comparing three approaches: static, dynamic, and incremental dynamic.
Keywords: Incremental dynamic classifier | Dynamic segmentation | Incremental data analysis | Cluster analysis | Classification | Big data
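The buffer-based assignment loop described in the abstract can be sketched as follows. The class name, distance threshold rule, and buffer eviction policy are illustrative assumptions, since the abstract does not specify the exact update equations:

```python
import math

class DynamicClassificationUnit:
    """Sketch of buffer-based incremental segmentation (hypothetical API;
    the paper's exact update rules are not given in the abstract)."""

    def __init__(self, threshold, buffer_size=5):
        self.threshold = threshold      # max distance to join an existing segment
        self.buffer_size = buffer_size  # representative cases kept per segment
        self.buffers = []               # one small buffer of cases per segment

    @staticmethod
    def _dist(a, b):
        return math.dist(a, b)

    def _centroid(self, buf):
        return [sum(dim) / len(buf) for dim in zip(*buf)]

    def assign(self, case):
        """Return the segment index for `case`, creating a new segment
        when it is not 'similar enough' to any stored representatives."""
        best, best_d = None, float("inf")
        for i, buf in enumerate(self.buffers):
            d = self._dist(case, self._centroid(buf))
            if d < best_d:
                best, best_d = i, d
        if best is None or best_d > self.threshold:
            self.buffers.append([case])   # substantially different: new segment
            return len(self.buffers) - 1
        buf = self.buffers[best]
        buf.append(case)                  # recompute only the relevant buffer
        if len(buf) > self.buffer_size:
            buf.pop(0)                    # keep the data buffer small
        return best
```

Only the matched buffer is touched on each update, which mirrors the paper's goal of avoiding a reexamination of all past data.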
MapReduce based tipping point scheduler for parallel image processing
MapReduce based tipping point scheduler for parallel image processing - 2020
Nowadays, Big Data image processing is very much in need due to its proven success in the fields of business information systems, medical science, and social media. However, as the days pass, the computation of Big Data images is becoming more complex, which ultimately results in complex resource management and higher task execution time. Researchers have been using a combination of CPU and GPU based computing to cut down the execution time; however, when it comes to scaling of compute nodes, the combination of CPU and GPU based computing still remains a challenge due to the high communication cost factor. In order to tackle this issue, the Map-Reduce framework has come out to be a viable option, as its workflow optimization can be enhanced by changing its underlying job scheduling mechanism. This paper presents a comparative study of job scheduling algorithms which could be deployed over various Big Data based image processing applications and also proposes a tipping point scheduling algorithm to optimize the workflow for job execution on multiple nodes. The evaluation of the proposed scheduling algorithm is done by implementing a parallel image segmentation algorithm to detect lung tumors on image datasets of up to 3 GB. In terms of performance, comprising task execution time and throughput, the proposed tipping point scheduler has come out to be the best scheduler, followed by the Map-Reduce based Fair scheduler. The proposed tipping point scheduler is 1.14 times better than the Map-Reduce based Fair scheduler and 1.33 times better than the Map-Reduce based FIFO scheduler in terms of task execution time and throughput. In terms of speedup comparison between single node and multiple nodes, the proposed tipping point scheduler attained a speedup of 4.5× for the multi-node architecture.
Keywords: Job scheduler | Workflow optimization | Map-Reduce | Tipping point scheduler | Parallel image segmentation | Lung tumor
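The abstract does not define the tipping-point rule itself, so the sketch below assumes one plausible reading: run FIFO (as the baseline schedulers do) until the pending load crosses a threshold, then switch to a fair, cheapest-job-first pick. All names and the switching rule are assumptions for illustration only:

```python
from collections import deque

class TippingPointScheduler:
    """Illustrative sketch only: FIFO below an assumed load tipping point,
    cheapest-pending-job-first above it. Not the paper's exact algorithm."""

    def __init__(self, tipping_point):
        self.tipping_point = tipping_point
        self.queue = deque()            # entries are (job_id, estimated_cost)

    def submit(self, job_id, cost):
        self.queue.append((job_id, cost))

    def next_job(self):
        if not self.queue:
            return None
        if len(self.queue) < self.tipping_point:
            return self.queue.popleft()          # light load: FIFO regime
        # past the tipping point: pick the cheapest pending job so short
        # tasks are not starved behind long-running ones
        job = min(self.queue, key=lambda j: j[1])
        self.queue.remove(job)
        return job
```

The point of the sketch is only that the scheduling policy changes regime at a measurable threshold, which is the sense in which the paper's scheduler is compared against the Fair and FIFO baselines.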
Towards 5G network slicing for vehicular ad-hoc networks: An end-to-end approach
Publication year: 2020 - English PDF pages: 7 - Persian translation (doc) pages: 16
5G networks must not only support increasing data rates but also provide a shared infrastructure on which new services with widely varying quality-of-service (QoS) requirements can be delivered with lower latency. More specifically, applications of multipurpose vehicular ad-hoc networks (VANETs), which are essentially oriented toward safety and entertainment (such as video streaming and web browsing), are on the rise. Most of these applications have strict latency constraints on the order of a few milliseconds and require high reliability. To address such requirements, the 5G platform calls for programmable virtual networks and traffic-engineering solutions such as network slicing. To this end, in this paper we propose a dynamic, programmable, end-to-end network slicing mechanism for LTE networks based on M-CORD. A key feature of M-CORD that the proposed network slicing mechanism exploits is its virtualized EPC, which enables customization and modification. M-CORD provides the functionality needed to program slice definitions, and the proposed mechanism fully follows its software-defined approach. Furthermore, we show how end devices located in different slices are assigned different QoS levels according to the end-user type. The results show that the proposed network slicing mechanism selects the appropriate slices and allocates resources to users according to their requirements and service type.
Keywords: Network slicing | 5G | M-CORD | LTE | NSSF | VANET
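The slice-selection idea can be sketched as a QoS lookup. The slice names and QoS figures below are illustrative placeholders, not values from the M-CORD testbed:

```python
# Hypothetical slice catalogue: names and QoS numbers are illustrative,
# not taken from the paper's deployment.
SLICES = {
    "safety":       {"max_latency_ms": 5,   "min_reliability": 0.99999},
    "infotainment": {"max_latency_ms": 100, "min_reliability": 0.99},
}

def select_slice(required_latency_ms, required_reliability):
    """Return the least demanding slice that still satisfies the request,
    so that low-priority traffic does not consume the safety slice."""
    candidates = [
        (qos["max_latency_ms"], name)
        for name, qos in SLICES.items()
        if qos["max_latency_ms"] <= required_latency_ms
        and qos["min_reliability"] >= required_reliability
    ]
    return max(candidates)[1] if candidates else None
```

Choosing the least capable slice that still meets the request mirrors the abstract's claim that resources are allocated according to user requirements and service type.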
Transform domain representation-driven convolutional neural networks for skin lesion segmentation
Transform domain representation-driven convolutional neural networks for skin lesion segmentation - 2020
Automated diagnosis systems provide a huge improvement in early detection of skin cancer and, consequently, contribute to successful treatment. Recent research on convolutional neural networks has achieved enormous success in segmentation and object detection tasks. However, these networks require large amounts of data, which is a big challenge in the medical domain, where data are often insufficient and even a pretrained model on medical images can hardly be found. Lesion segmentation, as the initial step of skin cancer analysis, remains a challenging issue since datasets are small and include a variety of images in terms of light, color, scale, and marks, which has led researchers to use extensive augmentation and preprocessing techniques or to fine-tune the network with a model pretrained on irrelevant images. A segmentation model based on convolutional neural networks is proposed in this study for the tasks of skin lesion segmentation and dermoscopic feature segmentation. The network is trained from scratch, and despite the small size of the datasets, neither excessive data augmentation nor any preprocessing to remove artifacts or enhance the images is applied. Instead, we investigated incorporating image representations of the transform domain into the convolutional neural network and compared this to a model with more convolutional layers; the former resulted in a 6% higher Jaccard index and a shorter training time. The model was further improved by applying the CIELAB color space, and the performance of the final proposed architecture is evaluated on publicly available datasets from the ISBI challenges in 2016 and 2017. The proposed model has resulted in an improvement of as much as 7% for the segmentation metrics and 17% for the feature segmentation, which demonstrates the robustness of this unique hybrid framework and its potential for future applications and further improvement.
Keywords: Convolutional neural network | Dermoscopic features | Melanoma | Skin lesion segmentation | Transform domain
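As an illustration of feeding a transform-domain representation into a CNN, a 2-D DCT-II (one plausible choice of transform; the abstract does not name the exact one) can be computed per image block and stacked as an extra input channel. This is a naive O(n⁴) reference implementation for small blocks:

```python
import math

def dct2(block):
    """Naive 2-D DCT-II of a square block. A sketch of producing a
    transform-domain representation to feed a CNN as an extra channel;
    the paper's exact transform is an assumption here."""
    n = len(block)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos(math.pi * (2 * x + 1) * u / (2 * n))
                          * math.cos(math.pi * (2 * y + 1) * v / (2 * n)))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s  # orthonormal scaling
    return out
```

For a flat (constant) block, all energy lands in the DC coefficient `out[0][0]`, which is why such a channel lets the network separate smooth skin regions from textured lesion borders cheaply.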
Harmonization of large MRI datasets for the analysis of brain imaging patterns throughout the lifespan
Harmonization of large MRI datasets for the analysis of brain imaging patterns throughout the lifespan - 2020
As medical imaging enters its information era and presents rapidly increasing needs for big data analytics, robust pooling and harmonization of imaging data across diverse cohorts with varying acquisition protocols have become critical. We describe a comprehensive effort that merges and harmonizes a large-scale dataset of 10,477 structural brain MRI scans from participants without a known neurological or psychiatric disorder from 18 different studies that represent geographic diversity. We use this dataset and multi-atlas-based image processing methods to obtain a hierarchical partition of the brain from larger anatomical regions to individual cortical and deep structures and derive age trends of brain structure through the lifespan (3–96 years old). Critically, we present and validate a methodology for harmonizing this pooled dataset in the presence of nonlinear age trends. We provide a web-based visualization interface to generate and present the resulting age trends, enabling future studies of brain structure to compare their data with this reference of brain development and aging, and to examine deviations from ranges, potentially related to disease.
Keywords: MRI | Segmentation | FreeSurfer | MUSE | Brain | ROI
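The pooling idea can be illustrated with a minimal location-only harmonization sketch. The paper's actual method also models nonlinear age trends and site-specific scale effects (ComBat-style), which this deliberately omits:

```python
from statistics import mean

def harmonize(values, sites):
    """Shift each site's measurements so that site means agree with the
    pooled mean. Location-only sketch of cross-study harmonization; the
    real method additionally handles nonlinear age trends and variance."""
    pooled = mean(values)
    by_site = {}
    for v, s in zip(values, sites):
        by_site.setdefault(s, []).append(v)
    site_means = {s: mean(vs) for s, vs in by_site.items()}
    # remove the site offset, then restore the pooled location
    return [v - site_means[s] + pooled for v, s in zip(values, sites)]
```

After this shift, a scanner that systematically over-estimates a regional volume no longer masquerades as a biological difference between cohorts.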
Conflict management in the fusion of complementary segmentations of deformed kidneys and nephroblastoma
Conflict management in the fusion of complementary segmentations of deformed kidneys and nephroblastoma - 2020
The fusion of multiple segmentations aims to improve their accuracy in order to make them exploitable. However, conflicts may appear. In this paper, two conflict-management models are proposed for the fusion of complementary segmentations. This conflict-management and fusion procedure, integrated into the SAIAD project, carries out the fusion of segmentations of deformed kidneys and nephroblastoma using the combination of six independent methods. These methods are based on different criteria, such as the adjacent segmented slices, the variation of information, the Dice coefficient, the neighbouring labels, the pixel intensity of scanner images, and fully connected CRFs. The performance of our fusion models was evaluated on 139 scans from three patients with nephroblastoma, and the results demonstrate its effectiveness and the improvement of the resulting segmentations.
Keywords: Fusion | Conflict management | Segmentation | Cancer tumour
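The fusion step can be illustrated with a pixel-wise majority vote that flags conflicts. This is a simplification: the paper's conflict-management models weigh six heterogeneous methods rather than voting uniformly:

```python
from collections import Counter

def fuse_labels(label_maps, agreement=0.5):
    """Fuse several candidate segmentations (flat lists of per-pixel
    labels) by majority vote, marking pixels where no label exceeds the
    agreement fraction as conflicts needing dedicated management."""
    fused, conflicts = [], []
    n = len(label_maps)
    for pixel_labels in zip(*label_maps):
        label, votes = Counter(pixel_labels).most_common(1)[0]
        fused.append(label)                 # keep the best guess either way
        conflicts.append(votes / n <= agreement)  # True = conflicting pixel
    return fused, conflicts
```

The `conflicts` mask identifies exactly the pixels where a real system would invoke a conflict-management model instead of trusting the plain vote.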
Temporal and spatial deep learning network for infrared thermal defect detection
Temporal and spatial deep learning network for infrared thermal defect detection - 2019
The most common types of defects in composites are debonding and delamination. It is difficult to detect inner defects on a complex-shaped specimen using conventional optical thermography nondestructive testing (NDT) methods. In this paper, a hybrid spatial and temporal deep learning architecture for automatic thermography defect detection is proposed. The integration of a cross-network learning strategy has the capability to significantly minimize uneven illumination and enhance the detection rate. The probability of detection (POD) has been derived to measure the detection results, and this is coupled with comparison studies to verify the efficacy of the proposed method. The results show that the visual geometry group-Unet (VGG-Unet) cross learning structure can significantly improve the contrast between the defective and non-defective regions. In addition, an investigation of different feature extraction methods embedded in deep learning is conducted to optimize the learning structure. To investigate the efficacy and robustness of the proposed method, experimental studies have been carried out on inner debond defects in both regular and irregular shaped carbon fiber reinforced polymer (CFRP) specimens.
Keywords: Deep learning | Segmentation | Thermography defect detection | Nondestructive testing
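The probability of detection (POD) mentioned above can be estimated, in its simplest hit/miss form, by binning defects by size and counting hits. Formal POD studies fit a logistic model instead, so this is only a sketch of the quantity being measured:

```python
def pod_by_size(defects, bin_width=1.0):
    """Hit/miss POD estimate. `defects` is a list of (size, detected)
    pairs; returns {bin_start_size: fraction detected}. A simplification
    of the model-based POD derivation used in the paper."""
    bins = {}
    for size, detected in defects:
        b = int(size // bin_width)
        hits, total = bins.get(b, (0, 0))
        bins[b] = (hits + (1 if detected else 0), total + 1)
    return {b * bin_width: hits / total
            for b, (hits, total) in sorted(bins.items())}
```

Plotting this fraction against defect size gives the familiar POD curve: small defects are missed often, and the curve rises toward 1 as defects grow.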
A systematic survey of computer-aided diagnosis in medicine: Past and present developments
A systematic survey of computer-aided diagnosis in medicine: Past and present developments - 2019
Computer-aided diagnosis (CAD) in medicine is the result of a large amount of effort expended in the interface of medicine and computer science. As some CAD systems in medicine try to emulate the diagnostic decision-making process of medical experts, they can be considered as expert systems in medicine. Furthermore, CAD systems in medicine may process clinical data that can be complex and/or massive in size. They do so in order to infer new knowledge from data and use that knowledge to improve their diagnostic performance over time. Therefore, such systems can also be viewed as intelligent systems because they use a feedback mechanism to improve their performance over time. The main aim of the literature survey described in this paper is to provide a comprehensive overview of past and current CAD developments. This survey/review can be of significant value to researchers and professionals in medicine and computer science. There are already some reviews about specific aspects of CAD in medicine. However, this paper focuses on the entire spectrum of the capabilities of CAD systems in medicine. It also identifies the key developments that have led to today's state-of-the-art in this area. It presents an extensive and systematic literature review of CAD in medicine, based on 251 carefully selected publications. While medicine and computer science have advanced dramatically in recent years, each area has also become profoundly more complex. This paper advocates that in order to further develop and improve CAD, it is required to have well-coordinated work among researchers and professionals in these two constituent fields. Finally, this survey helps to highlight areas where there are opportunities to make significant new contributions. This may profoundly impact future research in medicine and in select areas of computer science.
Keywords: Computer-aided diagnosis | Computer-aided detection | Expert and intelligent systems | Computerized signal analysis | Segmentation | Classification
Estimation of the degree of hydration of concrete through automated machine learning based microstructure analysis – A study on effect of image magnification
Estimation of the degree of hydration of concrete through automated machine learning based microstructure analysis – A study on effect of image magnification-2019
Scanning electron microscopy (SEM) images are commonly used to understand the microstructure of concrete. With the advancements in the field of computer vision, many researchers have adopted image processing techniques for microstructure analysis. Most of the previous methods are not adaptable, not reproducible, only semi-automated, and, most importantly, highly influenced by image magnification. Therefore, to overcome these challenges, this paper presents a machine learning based image segmentation method for microstructure analysis and degree of hydration measurement using SEM images. In addition, the authors looked into the impact of the magnification of SEM images on model accuracy and classifier training for the degree of hydration measurement, considering two scenarios. First, the image segmentation was performed using a classifier trained at a specific magnification, and then a common classifier was trained using images of different magnifications. The results show that the Random Forest classifier algorithm is suitable for microstructure analysis using SEM images. Through statistical analysis, it has been shown that there is no significant effect of magnification on model training and accuracy for the degree of hydration measurement. Thus, a single classifier can be used to process images of different magnifications of a specimen, which reduces the training effort and computational time. The proposed method can generate highly accurate and reliable results in a shorter time and at lower cost. Moreover, the findings in this research can be useful for researchers to determine the optimum magnification required for microstructure analysis.
Keywords: Concrete microstructure analysis | Degree of hydration | Machine learning | Image segmentation
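Once each SEM pixel has been classified into a phase, the degree of hydration can be estimated from the phase fractions. The phase names below are illustrative, and the ratio used is one common area-fraction estimate, not necessarily the paper's exact formula:

```python
def degree_of_hydration(pixel_phases):
    """Estimate degree of hydration as the hydrated fraction of the
    cementitious pixels in a segmented SEM image. `pixel_phases` is a
    flat list of phase labels; labels are hypothetical examples."""
    cementitious = ("unhydrated", "hydration_products")
    cement = sum(1 for p in pixel_phases if p in cementitious)
    hydrated = sum(1 for p in pixel_phases if p == "hydration_products")
    if cement == 0:
        raise ValueError("no cementitious pixels found in segmentation")
    return hydrated / cement   # pores and aggregate are excluded
```

Because the estimate depends only on phase fractions, it is exactly the quantity the paper tests for magnification sensitivity: if segmentation is magnification-invariant, so is this ratio.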
Development of accurate human head models for personalized electromagnetic dosimetry using deep learning
Development of accurate human head models for personalized electromagnetic dosimetry using deep learning - 2019
The development of personalized human head models from medical images has become an important topic in the electromagnetic dosimetry field, including the optimization of electrostimulation, safety assessments, etc. Human head models are commonly generated via the segmentation of magnetic resonance images into different anatomical tissues. This process is time consuming and requires special experience for segmenting a relatively large number of tissues. Thus, it is challenging to accurately compute the electric field in different specific brain regions. Recently, deep learning has been applied for the segmentation of the human brain. However, most studies have focused on the segmentation of brain tissue only and little attention has been paid to other tissues, which are considerably important for electromagnetic dosimetry. In this study, we propose a new architecture for a convolutional neural network, named ForkNet, to perform the segmentation of whole human head structures, which is essential for evaluating the electrical field distribution in the brain. The proposed network can be used to generate personalized head models and applied for the evaluation of the electric field in the brain during transcranial magnetic stimulation. Our computational results indicate that the head models generated using the proposed network exhibit strong matching with those created via manual segmentation in an intra-scanner segmentation task.
Keywords: convolutional neural network | Deep learning | Image segmentation | Transcranial magnetic stimulation
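A segmented head model feeds dosimetry by mapping each voxel's tissue label to electrical properties for the field solver. The conductivity values below are rough illustrative numbers, not taken from the paper:

```python
# Illustrative tissue-to-conductivity lookup (S/m); values are
# literature-style approximations, not the paper's parameters.
CONDUCTIVITY_S_PER_M = {
    "skin": 0.10, "skull": 0.01, "csf": 1.65,
    "grey_matter": 0.28, "white_matter": 0.13,
}

def conductivity_map(label_image):
    """Turn a segmented head model (tissue labels per voxel/pixel) into
    the conductivity map an electromagnetic solver consumes. This is why
    segmenting non-brain tissues matters for dosimetry: each label
    contributes a distinct conductivity to the field computation."""
    return [[CONDUCTIVITY_S_PER_M[tissue] for tissue in row]
            for row in label_image]
```

A mislabeled skull or CSF voxel changes the local conductivity by two orders of magnitude, which is why whole-head (not brain-only) segmentation accuracy drives electric-field accuracy in TMS simulations.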