Transform domain representation-driven convolutional neural networks for skin lesion segmentation
Transform domain representation-driven convolutional neural networks for skin lesion segmentation - 2020
Automated diagnosis systems provide a huge improvement in the early detection of skin cancer and consequently contribute to successful treatment. Recent research on convolutional neural networks has achieved enormous success in segmentation and object detection tasks. However, these networks require large amounts of data, which is a major challenge in the medical domain, where data are often insufficient and a model pretrained on medical images can hardly be found. Lesion segmentation, the initial step of skin cancer analysis, remains challenging since the datasets are small and include a variety of images in terms of light, color, scale, and marks, which has led researchers to use extensive augmentation and preprocessing techniques or to fine-tune networks pretrained on irrelevant images. This study proposes a segmentation model based on convolutional neural networks for the tasks of skin lesion segmentation and dermoscopic feature segmentation. The network is trained from scratch, and despite the small size of the datasets, neither excessive data augmentation nor any preprocessing to remove artifacts or enhance the images is applied. Instead, we investigated incorporating transform-domain image representations into the convolutional neural network; compared with a model with more convolutional layers, this yielded a 6% higher Jaccard index and a shorter training time. The model is further improved by applying the CIELAB color space, and the performance of the final proposed architecture is evaluated on publicly available datasets from the ISBI challenges in 2016 and 2017. The proposed model improves the segmentation metrics by as much as 7% and feature segmentation by 17%, which demonstrates the robustness of this unique hybrid framework as well as its potential for future applications and further improvement.
Keywords: Convolutional neural network | Dermoscopic features | Melanoma | Skin lesion segmentation | Transform domain
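The Jaccard index reported above is the standard overlap metric for comparing a predicted lesion mask with its ground truth. As a minimal illustration (not the paper's code), it can be computed for binary masks as:

```python
import numpy as np

def jaccard_index(pred, target):
    """Intersection-over-union of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    inter = np.logical_and(pred, target).sum()
    return inter / union

# toy 4x4 masks: 3 overlapping pixels out of 4 in the union
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(jaccard_index(a, b))  # → 0.75
```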
Minimum margin loss for deep face recognition
Publication year: 2020 - English PDF pages: 9 - Persian translated DOC pages: 24
Face recognition has achieved great success, mainly owing to the rapid development of deep neural networks (DNNs) in recent years. Different loss functions can be used in a deep neural network, leading to different performance, and several loss functions have recently been proposed. However, they cannot solve the problem of margin bias that exists in unbalanced datasets. In this paper, we propose to solve the margin bias problem by setting a minimum margin for all class pairs. We propose a new loss function, called Minimum Margin Loss (MML), which aims to enlarge the margin of those class-center pairs that are too close to each other, so as to enhance the discriminative ability of the deep features. MML supervises the training process together with the Softmax Loss and the Centre Loss, monitoring the margins of all classes regardless of their class distribution. We implement MML on the Inception-ResNet-v1 platform and conduct extensive experiments on seven face recognition datasets, including MegaFace, FaceScrub, LFW, SLLFW, YTF, IJB-B, and IJB-C. The experimental results show that the proposed MML loss function leads to a new state of the art in face recognition and reduces the negative effect of margin bias.
Keywords: Deep learning | Convolutional neural networks (CNN) | Face recognition | Minimum margin loss (MML)
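The core idea of a minimum-margin penalty can be sketched with NumPy: class-centre pairs closer than a chosen minimum margin m contribute a squared shortfall to the loss, so only too-close pairs are pushed apart. This is a hedged sketch of the idea, not the paper's exact MML formulation, and the centre values below are hypothetical:

```python
import numpy as np

def min_margin_penalty(centers, m):
    """Sum of squared shortfalls below a minimum margin m over all
    class-centre pairs -- a sketch of the minimum-margin idea, not the
    paper's exact loss."""
    k = len(centers)
    loss = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(centers[i] - centers[j])
            loss += max(0.0, m - d) ** 2  # only pairs closer than m contribute
    return loss

# three hypothetical 2-D class centres; only one pair is closer than m=1.0
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 0.5]])
print(min_margin_penalty(centers, m=1.0))  # → 0.25
```

In training, such a term would be added to the softmax and centre losses, so the margin constraint is enforced regardless of how many samples each class has.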
Automatic detection, localization and segmentation of nano-particles with deep learning in microscopy images
Automatic detection, localization and segmentation of nano-particles with deep learning in microscopy images - 2019
With the growing amount of high-resolution microscopy images, automatic nano-particle detection, shape analysis, and size determination have gained importance for providing quantitative support in the evaluation of materials. In this paper, we present a new deep learning method for detecting nano-particles and determining their shapes and sizes simultaneously. The proposed method employs a multiple-output convolutional neural network (MO-CNN) with two outputs: a detection output that gives the locations of the particles, and a segmentation output that provides the boundaries of the nano-particles. The final particle sizes are determined with a modified Hough algorithm that runs on the segmentation output. The proposed method is tested and evaluated on a dataset containing 17 TEM images of Fe3O4 and silica-coated nano-particles. We also compared these results with the U-net algorithm, a popular deep learning method. The experiments showed that the proposed method achieves 98.23% accuracy for detection and 96.59% accuracy for segmentation of nano-particles.
Keywords: Nano-particle | Deep learning | Object detection | MO-CNN | Hough transform
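The sizing step can be illustrated with a Hough-style radius vote: each pixel of a particle's segmented boundary votes for its rounded distance to the boundary centroid, and the most popular radius bin wins. This is a simplified sketch of the idea for a single, roughly circular particle, not the paper's modified Hough algorithm:

```python
import numpy as np

def radius_vote(edge_mask, max_radius=50):
    """Hough-style radius estimate for one roughly circular particle:
    boundary pixels vote for their rounded distance to the centroid of
    the boundary; the radius bin with the most votes wins."""
    ys, xs = np.nonzero(edge_mask)
    cy, cx = ys.mean(), xs.mean()
    votes = np.zeros(max_radius + 1, dtype=int)
    for y, x in zip(ys, xs):
        r = int(round(np.hypot(y - cy, x - cx)))
        if r <= max_radius:
            votes[r] += 1
    return int(votes.argmax())

# synthetic circle outline of radius 8 centred at (16, 16)
yy, xx = np.mgrid[:32, :32]
d = np.hypot(yy - 16, xx - 16)
ring = (d >= 7.5) & (d <= 8.5)
print(radius_vote(ring))  # → 8
```

A full implementation would also have to separate touching particles and vote over candidate centres, which is where the paper's modifications come in.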
A survey on deep learning based face recognition
A survey on deep learning based face recognition - 2019
Deep learning, in particular deep convolutional neural networks, has received increasing interest in face recognition recently, and a number of deep learning methods have been proposed. This paper summarizes about 330 contributions in this area. It reviews major deep learning concepts pertinent to face image analysis and face recognition, and provides a concise overview of studies on specific face recognition problems, such as handling variations in pose, age, illumination, and expression, and heterogeneous face matching. A summary of databases used for deep face recognition is given as well. Finally, some open challenges and directions for future research are discussed.
Keywords: Deep learning | Face recognition | Artificial Neural Network | Convolutional Neural Networks | Autoencoder | Generative Adversarial Networks
Deep learning-assisted literature mining for in vitro radiosensitivity data
Deep learning-assisted literature mining for in vitro radiosensitivity data - 2019
Background and purpose: Integrated analysis of existing radiosensitivity data obtained by the gold-standard clonogenic assay has the potential to improve our understanding of cancer cell radioresistance. However, extraction of radiosensitivity data from the literature is highly labor-intensive. To aid in this task, we developed an analysis pipeline, based on deep convolutional neural networks (CNNs) and other computer technologies, that extracts radiosensitivity data derived from clonogenic assays from the literature. Materials and methods: Three classifiers (C1–3) were developed to identify publications containing radiosensitivity data derived from clonogenic assays. C1 uses Faster Regions CNN with Inception Resnet v2 (fRCNN-IRv2), VGG-16, and Optical Character Recognition (OCR) to identify publications that contain semi-logarithmic graphs showing radiosensitivity data derived from clonogenic assays. C2 uses fRCNN-IRv2 and OCR to identify publications that contain bar graphs showing such data. C3 is a program that identifies publications containing keywords related to radiosensitivity data derived from clonogenic assays. A program (iSF2) was developed using Mask RCNN and OCR to extract the surviving fraction after 2-Gy irradiation (SF2) as assessed by clonogenic assays and presented in semi-logarithmic graphs. The efficacy of C1–3 and iSF2 was tested using seven datasets (1805 and 222 publications in total, respectively). Results: C1–3 yielded a sensitivity of 91.2% ± 3.4% and a specificity of 90.7% ± 3.6%. iSF2 returned SF2 values that were within 2.9% ± 2.6% of the SF2 values determined by radiation oncologists. Conclusion: Our analysis pipeline is potentially useful for acquiring radiosensitivity data derived from clonogenic assays from the literature.
Keywords: Clonogenic assays | Radiosensitivity | Deep learning | Convolutional neural networks | Radiation oncology
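The SF2 values extracted above come from clonogenic survival curves, which in standard radiobiology are commonly described by the linear-quadratic model SF(D) = exp(-(αD + βD²)). As a worked illustration of that textbook relation (not code from the paper, and with purely hypothetical coefficients):

```python
import math

def surviving_fraction(dose_gy, alpha, beta):
    """Linear-quadratic cell survival: SF(D) = exp(-(alpha*D + beta*D^2))."""
    return math.exp(-(alpha * dose_gy + beta * dose_gy ** 2))

# illustrative, hypothetical fit coefficients (units: Gy^-1 and Gy^-2)
alpha, beta = 0.3, 0.03
sf2 = surviving_fraction(2.0, alpha, beta)
print(round(sf2, 3))  # exp(-(0.6 + 0.12)) → 0.487
```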
Establishing structure-property localization linkages for elastic deformation of three-dimensional high contrast composites using deep learning approaches
Establishing structure-property localization linkages for elastic deformation of three-dimensional high contrast composites using deep learning approaches - 2019
Data-driven methods are attracting growing attention in the field of materials science. In particular, it is now becoming clear that machine learning approaches offer a unique avenue for successfully mining practically useful process-structure-property (PSP) linkages from a variety of materials data. Most previous efforts in this direction have relied on feature design (i.e., the identification of the salient features of the material microstructure to be included in the PSP linkages). However, due to the rich complexity of features in most heterogeneous materials systems, it has been difficult to identify a set of consistent features that are transferable from one material system to another. With flexible architectures and remarkable learning capability, the emergent deep learning approaches offer a new path forward that circumvents the feature design step. In this work, we demonstrate the implementation of a feature-engineering-free deep learning approach to the prediction of the microscale elastic strain field in a given three-dimensional voxel-based microstructure of a high-contrast two-phase composite. The results show that deep learning approaches can implicitly learn salient information about local neighborhood details, and significantly outperform state-of-the-art methods.
Keywords: Materials informatics | Convolutional neural networks | Deep learning | Localization | Structure-property linkages
Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images
Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images - 2019
The goal of this work was to develop a method for accurate and robust automatic segmentation of the prostate clinical target volume in transrectal ultrasound (TRUS) images for brachytherapy. These images can be difficult to segment because of weak or insufficient landmarks or strong artifacts. We devise a method, based on convolutional neural networks (CNNs), that produces accurate segmentations on easy and difficult images alike. We propose two strategies to achieve improved segmentation accuracy on difficult images. First, for CNN training we adopt an adaptive sampling strategy, whereby the training process is encouraged to pay more attention to images that are difficult to segment. Second, we train a CNN ensemble and use the disagreement among its members to identify uncertain segmentations and to estimate a segmentation uncertainty map. We improve uncertain segmentations by utilizing prior shape information in the form of a statistical shape model. Our method achieves a Hausdorff distance of 2.7 ± 2.3 mm and a Dice score of 93.9 ± 3.5%. Comparisons with several competing methods show that our method achieves significantly better results and reduces the likelihood of committing large segmentation errors. Furthermore, our experiments show that our approach to estimating segmentation uncertainty is better than or on par with recent methods for estimating prediction uncertainty in deep learning models. Our study demonstrates that estimation of model uncertainty and use of prior shape information can significantly improve the performance of CNN-based medical image segmentation methods, especially on difficult images.
Keywords: Image segmentation | Model uncertainty | Shape models | Clustering | Deep learning
Weld image deep learning-based on-line defects detection using convolutional neural networks for Al alloy in robotic arc welding
Weld image deep learning-based on-line defects detection using convolutional neural networks for Al alloy in robotic arc welding - 2019
Accurate on-line weld defect detection is still challenging for robotic welding manufacturing due to the complexity of weld defects. This paper studied deep learning-based on-line defect detection for aluminum alloy in robotic arc welding using convolutional neural networks (CNN) and weld images. Firstly, an image acquisition system was developed to simultaneously collect weld images, providing more information from real-time weld images taken at different angles, including top front, top back, and back seam. Then, a new 11-layer CNN classification model based on weld images was designed to identify weld penetration defects. To improve the robustness and generalization ability of the CNN model, weld images captured at different welding currents and wire feeding speeds were included in the dataset. Motivated by actual industrial challenges, such as the instability of the welding arc, the complexity of the welding environment, and random changes in plate gap conditions, two kinds of data augmentation, noise adding and image rotation, were used to enlarge the CNN dataset while parameter optimization was carried out. Finally, a non-zero pixel method was proposed to quantitatively evaluate and visualize the deep learning features, and their physical meaning was clearly explained. Instead of suppressing the interference from arc light as in traditional approaches, the CNN model makes full use of the arc light by combining it in various ways to form complementary features. Test results show that the CNN model outperforms our previous work, with a mean classification accuracy of 99.38%. This paper can provide some guidance for on-line detection of manufacturing quality in metal additive manufacturing (AM) and laser welding.
Keywords: Deep learning | Defects detection | Al alloy | Robotic arc welding | Convolutional neural networks | Weld images | Feature visualization
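The two augmentations named above, noise adding and image rotation, can be sketched for a grayscale uint8 weld image as follows. This is a simplification under stated assumptions: the paper does not specify its noise model or rotation angles, so Gaussian noise and random 90-degree rotations are used here purely for illustration:

```python
import numpy as np

def augment(image, noise_std=5.0, rng=None):
    """Add Gaussian noise and apply a random 90-degree rotation to a
    grayscale uint8 image -- a hedged sketch of the two augmentations,
    not the paper's exact procedure."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(float) + rng.normal(0.0, noise_std, image.shape)
    noisy = np.clip(noisy, 0, 255).astype(np.uint8)  # stay in valid pixel range
    k = rng.integers(0, 4)                           # 0, 90, 180 or 270 degrees
    return np.rot90(noisy, k)

rng = np.random.default_rng(0)
img = np.full((64, 64), 128, dtype=np.uint8)  # flat stand-in for a weld image
aug = augment(img, rng=rng)
print(aug.shape, aug.dtype)  # → (64, 64) uint8
```

Applying such transforms at training time effectively multiplies the dataset, which matters here because weld images under varying arc and gap conditions are expensive to collect.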
State-of-the-Art Deep Learning in Cardiovascular Image Analysis
State-of-the-art deep learning in cardiovascular image analysis - 2019
Cardiovascular imaging is going to change substantially in the next decade, fueled by the deep learning revolution. For medical professionals, it is important to keep track of these developments to ensure that deep learning can have a meaningful impact on clinical practice. This review aims to be a stepping stone in this process. The general concepts underlying most successful deep learning algorithms are explained, and an overview of the state of the art in deep learning for cardiovascular imaging is provided. This review discusses >80 papers, covering modalities ranging from cardiac magnetic resonance, computed tomography, and single-photon emission computed tomography, to intravascular optical coherence tomography and echocardiography. Many different machine learning algorithms were used throughout these papers, the most common being convolutional neural networks. Recent algorithms such as generative adversarial models were also used. The potential implications of deep learning algorithms for clinical practice, now and in the near future, are discussed. (J Am Coll Cardiol Img 2019;12:1549–65) © 2019 by the American College of Cardiology Foundation.
Deep learning in medical image analysis: A third eye for doctors
Deep learning in medical image analysis: a third eye for doctors - 2019
Aim and scope: Artificial intelligence (AI) in medicine is a fast-growing field. The rise of deep learning algorithms, such as convolutional neural networks (CNNs), offers fascinating perspectives for the automation of medical image analysis. In this systematic review article, we screened the current literature and investigated the following question: "Can deep learning algorithms for image recognition improve visual diagnosis in medicine?" Materials and methods: We provide a systematic review of articles using CNNs for medical image analysis, published in the medical literature before May 2019. Articles were screened based on the following items: type of image analysis approach (detection or classification), algorithm architecture, dataset used, training phase, test, comparison method (with specialists or other), results (accuracy, sensitivity, and specificity), and conclusion. Results: We identified 352 articles in the PubMed database and excluded 327 items for which performance was not assessed (review articles) or for which tasks other than detection or classification, such as segmentation, were assessed. The 25 included papers were published from 2013 to 2019 and were related to a vast array of medical specialties. Authors were mostly from North America and Asia. Large numbers of high-quality medical images were necessary to train the CNNs, often resulting from international collaboration. The most common CNNs, such as AlexNet and GoogleNet, designed for the analysis of natural images, proved their applicability to medical images. Conclusion: CNNs are not replacement solutions for medical doctors, but will contribute to optimizing routine tasks and thus have a potential positive impact on our practice. Specialties with a strong visual component, such as radiology and pathology, will be deeply transformed. Medical practitioners, including surgeons, have a key role to play in the development and implementation of such devices.
Keywords: Deep learning | Artificial intelligence | Neural network | Image analysis | Systematic review | Computer vision