Multidirectional wavelet-based edge analysis for surface inspection by optical profilometry
Publication year: 2019 - English PDF: 10 pages - Persian DOC translation: 18 pages
Scientists, engineers, and manufacturers have an urgent need for better inspection and quality-control techniques. Drawing on optical science and computer science, optical metrology pursues simulation, design, computation, and inspection for many scientific and industrial applications such as optics, mechanics, aeronautics, electronics, and more. Fringe-pattern analysis performs operations on optical images in order to obtain the interferometric phase map and then extract useful information from it. In this paper, an improvement of a local fringe-demodulation algorithm is presented, based on a new multidirectional wavelet. Numerical and experimental work shows appealing gains compared with other standard algorithms. Our approach runs as fast as popular phase-retrieval methods, yet demodulates noisy fringes with notably better accuracy. All of this is achieved without any pre-processing by pattern filtering.
Keywords: Optical imaging | Computer science | Image processing | Multidirectional wavelet | Phase retrieval | Fringe projection
|Translated Article|
TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set
TOP-GAN: Stain-free cancer cell classification using deep learning with a small training set - 2019
We propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it for classification of healthy and cancer cell lines acquired by quantitative phase imaging. The proposed method, called transferring of pre-trained generative adversarial network (TOP-GAN), is hybridization between transfer learning and generative adversarial networks (GANs). Healthy cells and cancer cells of different metastatic potential have been imaged by low-coherence off-axis holography. After the acquisition, the optical path delay maps of the cells are extracted and directly used as inputs to the networks. In order to cope with the small number of classified images, we use GANs to train a large number of unclassified images from another cell type (sperm cells). After this preliminary training, we change the last layers of the network and design automatic classifiers for the correct cell type (healthy/primary cancer/metastatic cancer) with 90–99% accuracies, although small training sets of down to several images are used. These results are better in comparison to other classic methods that aim at coping with the same problem of a small training set. We believe that our approach makes the combination of holographic microscopy and deep learning networks more accessible to the medical field by enabling a rapid, automatic and accurate classification in stain-free imaging flow cytometry. Furthermore, our approach is expected to be applicable to many other medical image classification tasks, suffering from a small training set.
Keywords: Holography | Quantitative phase imaging | Deep learning | Machine learning algorithms | Image classification | Biological cells
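The transfer step this abstract describes (pre-train on many unclassified sperm-cell images, then replace the network's last layers with a new classifier head for the small labeled set) can be sketched in a minimal, framework-free form. All layer sizes, names, and weights below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-trained network: a small stack of dense layers standing
# in for the GAN discriminator trained on unclassified sperm-cell images
# (random weights here, purely for illustration).
pretrained = [rng.standard_normal((64, 32)), rng.standard_normal((32, 16))]

def features(x, layers):
    """Forward pass through the frozen pre-trained layers."""
    for w in layers:
        x = np.maximum(x @ w, 0.0)  # ReLU
    return x

# TOP-GAN-style transfer: keep the early layers frozen and attach a new
# classification head for the target classes
# (healthy / primary cancer / metastatic cancer).
n_classes = 3
head = rng.standard_normal((16, n_classes)) * 0.01

def classify(x):
    logits = features(x, pretrained) @ head
    return int(np.argmax(logits))

x = rng.standard_normal(64)  # one flattened optical-path-delay map
print(classify(x) in range(n_classes))  # → True
```

In practice only `head` would be trained (or fine-tuned together with the last frozen layer) on the small labeled set, which is what lets the approach work with very few classified images.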
Neurobiological divergence of the positive and negative schizophrenia subtypes identified upon a new factor-structure of psychopathology using non-negative factorization: An international machine-learning study
Neurobiological divergence of the positive and negative schizophrenia subtypes identified upon a new factor-structure of psychopathology using non-negative factorization: an international machine-learning study - 2019
Objective: Disentangling psychopathological heterogeneity in schizophrenia is challenging and previous results remain inconclusive. We employed advanced machine-learning to identify a stable and generalizable factorization of the “Positive and Negative Syndrome Scale (PANSS)”, and used it to identify psychopathological subtypes as well as their neurobiological differentiations. Methods: PANSS data from the Pharmacotherapy Monitoring and Outcome Survey cohort (1545 patients, 586 followed up after 1.35±0.70 years) were used for learning the factor-structure by an orthonormal projective non-negative factorization. An international sample, pooled from nine medical centers across Europe, USA, and Asia (490 patients), was used for validation. Patients were clustered into psychopathological subtypes based on the identified factor-structure, and the neurobiological divergence between the subtypes was assessed by classification analysis on functional MRI connectivity patterns. Results: A four-factor structure representing negative, positive, affective, and cognitive symptoms was identified as the most stable and generalizable representation of psychopathology. It showed higher internal consistency than the original PANSS subscales and previously proposed factor-models. Based on this representation, the positive-negative dichotomy was confirmed as the (only) robust psychopathological subtypes, and these subtypes were longitudinally stable in about 80% of the repeatedly assessed patients. Finally, the individual subtype could be predicted with good accuracy from functional connectivity profiles of the ventro-medial frontal cortex, temporoparietal junction, and precuneus. Conclusions: Machine-learning applied to multi-site data with cross-validation yielded a factorization generalizable across populations and medical systems. 
Together with subtyping and the demonstrated ability to predict subtype membership from neuroimaging data, this work further disentangles the heterogeneity in schizophrenia.
Keywords: non-negative factorization | brain imaging | subtyping | machine learning | multivariate classification | schizophrenia
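As a rough illustration of the factorization idea behind the study, here is a minimal multiplicative-update non-negative matrix factorization on a toy PANSS-like score matrix. The study uses an orthonormal projective variant; all numbers here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy PANSS-like matrix: 20 patients x 30 items, non-negative scores
# (PANSS items are rated 1-7).
X = rng.uniform(1, 7, size=(20, 30))

# Basic multiplicative-update NMF (X ≈ W H). Non-negativity of W and H is
# preserved by construction, which is what makes the factors interpretable
# as additive symptom dimensions.
k = 4  # four symptom factors (negative, positive, affective, cognitive)
W = rng.uniform(size=(20, k))
H = rng.uniform(size=(k, 30))
eps = 1e-9
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(round(err, 2))  # relative reconstruction error after convergence
```

Each row of `H` would then correspond to one symptom factor, and each patient's row of `W` gives their loadings on those factors, which is the representation used for subsequent clustering into subtypes.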
Combined machine learning and diffusion tensor imaging reveals altered anatomic fiber connectivity of the brain in primary open-angle glaucoma
Combined machine learning and diffusion tensor imaging reveals altered anatomic fiber connectivity of the brain in primary open-angle glaucoma - 2019
Parameters derived from diffusion tensor imaging (DTI) have been found to be significantly altered in the optic tracts, optic nerves, and optic radiations in patients with primary open-angle glaucoma (POAG). In this study, DTI-derived parameters were further constructed into fiber connectivity, and we investigated anatomical fiber connectivity changes within and beyond the visual pathway in POAG patients. DTI and T1-weighted magnetic resonance images were acquired in 18 POAG patients and 26 healthy controls (HC). White matter tracts based on the Brodmann atlases (BA) were constructed using the deterministic fiber tracking method. The mean fractional anisotropy (FA), fiber number (FN), and mean fiber length (FL) were measured and then evaluated using two-sample t-tests between POAG and HC. The fiber connectivity between regions was taken as the features for classifying HC and POAG using a machine learning method known as naïve Bayesian classification. The mean FA decreased in connections between visual cortex BA17/BA18 and cortex BA23/BA25/BA35/BA36, while it increased in the connections between cortex BA3/BA7/BA9 and BA5/BA6/BA45/BA25 in POAG. Classification using fibers where a significant difference in FN had been identified produced better accuracy (ACC=0.89) than using FA or FL (ACC=0.77 and 0.75, respectively). The FN of individual fiber connections with higher accuracy and significant changes in POAG involved brain regions associated with vision (BA19), depression (BA10/BA46/BA25), and memory (BA29). These findings strengthen the hypothesis that POAG involves changes in anatomical connectivity within and beyond the visual pathway. Classification using the machine learning method reveals that mean FN has the potential to be used as a biomarker for detecting white matter microstructure changes in POAG.
Keywords: Glaucoma | Anatomic white matter connectivity | Diffusion tensor imaging | Fiber tracking | Machine learning
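A minimal sketch of the naïve Bayesian classification step, applied to synthetic stand-ins for the fiber-number (FN) features (group sizes match the study; all feature values are illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic FN features for region-pair connections: 26 "HC" and 18 "POAG"
# subjects, 5 connections, with a mean shift in the POAG group.
hc   = rng.normal(100, 10, size=(26, 5))
poag = rng.normal(85, 10, size=(18, 5))
X = np.vstack([hc, poag])
y = np.array([0] * 26 + [1] * 18)  # 0 = HC, 1 = POAG

# Minimal Gaussian naive Bayes: per-class feature means/variances,
# classify by maximum posterior log-likelihood.
def fit(X, y):
    params = {}
    for c in np.unique(y):
        Xc = X[y == c]
        params[c] = (Xc.mean(0), Xc.var(0) + 1e-9, np.log(len(Xc) / len(X)))
    return params

def predict(params, x):
    def loglik(c):
        mu, var, logprior = params[c]
        return logprior - 0.5 * np.sum(np.log(2 * np.pi * var)
                                       + (x - mu) ** 2 / var)
    return max(params, key=loglik)

params = fit(X, y)
acc = np.mean([predict(params, x) == t for x, t in zip(X, y)])
print(acc)  # training accuracy on the synthetic features
```

The study's reported accuracies (e.g. ACC=0.89 for FN) would come from the same kind of classifier evaluated with proper held-out validation rather than training accuracy.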
A deep learning model for early prediction of Alzheimer's disease dementia based on hippocampal magnetic resonance imaging data
A deep learning model for early prediction of Alzheimer's disease dementia based on hippocampal magnetic resonance imaging data - 2019
It is challenging at baseline to predict when and which individuals who meet criteria for mild cognitive impairment (MCI) will ultimately progress to Alzheimer's disease (AD) dementia. Methods: A deep learning method is developed and validated based on magnetic resonance imaging scans of 2146 subjects (803 for training and 1343 for validation) to predict MCI subjects' progression to AD dementia in a time-to-event analysis setting. Results: The deep-learning time-to-event model predicted individual subjects' progression to AD dementia with a concordance index of 0.762 on 439 Alzheimer's Disease Neuroimaging Initiative testing MCI subjects with follow-up duration from 6 to 78 months (quartiles: [24, 42, 54]) and a concordance index of 0.781 on 40 Australian Imaging Biomarkers and Lifestyle Study of Aging testing MCI subjects with follow-up duration from 18 to 54 months (quartiles: [18, 36, 54]). The predicted progression risk also clustered individual subjects into subgroups with significant differences in their progression time to AD dementia (P < .0002). Improved performance for predicting progression to AD dementia (concordance index = 0.864) was obtained when the deep learning–based progression risk was combined with baseline clinical measures. Discussion: Our method provides a cost-effective and accurate means for prognosis and potentially to facilitate enrollment in clinical trials with individuals likely to progress within a specific temporal period.
Keywords: Deep learning | Hippocampus | Time-to-event analysis | Alzheimer’s disease
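The concordance index reported above measures how often the model's predicted risks order comparable patient pairs consistently with their observed progression times. A minimal implementation of Harrell's C, with toy numbers that are not from the paper, looks like this:

```python
# Minimal concordance index (Harrell's C) for a time-to-event model:
# fraction of comparable pairs whose predicted risks agree with the
# ordering of observed progression times.
def concordance_index(times, events, risks):
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair (i, j) is comparable if i's progression event was
            # observed before j's observed (or censored) time
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

times  = [6, 24, 42, 54, 78]   # months to progression or censoring
events = [1, 1, 1, 0, 0]       # 1 = progressed to AD dementia, 0 = censored
risks  = [0.9, 0.8, 0.4, 0.3, 0.1]
print(concordance_index(times, events, risks))  # → 1.0 (perfectly ordered)
```

A value of 0.5 corresponds to random ordering and 1.0 to perfect ordering, so the paper's 0.762–0.864 range indicates substantially better-than-chance risk ranking.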
Radiological images and machine learning: Trends, perspectives, and prospects
Radiological images and machine learning: trends, perspectives, and prospects - 2019
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning based systems have shown comparable performance to human decision-making. The applications of machine learning are the key ingredients of future clinical decision making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. Synchronistically, we will briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight on how to take advantage of machine learning powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Keywords: Deep learning | Machine learning | Imaging modalities | Deep neural network
Novel Machine Learning Identifies Brain Patterns Distinguishing Diagnostic Membership of Human Immunodeficiency Virus, Alcoholism, and Their Comorbidity of Individuals
Novel Machine Learning Identifies Brain Patterns Distinguishing Diagnostic Membership of Human Immunodeficiency Virus, Alcoholism, and Their Comorbidity of Individuals-2019
The incidence of alcohol use disorder (AUD) in human immunodeficiency virus (HIV) infection is twice that of the rest of the population. This study documents complex radiologically identified, neuroanatomical effects of AUD+HIV comorbidity by identifying structural brain systems that predicted diagnosis on an individual basis. Applying novel machine learning analysis to 549 participants (199 control subjects, 222 with AUD, 68 with HIV, 60 with AUD+HIV), 298 magnetic resonance imaging brain measurements were automatically reduced to small subsets per group. Significance of each diagnostic pattern was inferred from its accuracy in predicting diagnosis and performance on six cognitive measures. While all three diagnostic patterns predicted the learning and memory score, the AUD+HIV pattern was the largest and had the highest prediction accuracy (78.1%). Providing a roadmap for analyzing large, multimodal datasets, the machine learning analysis revealed imaging phenotypes that predicted diagnostic membership of magnetic resonance imaging scans of individuals with AUD, HIV, and their comorbidity.
Keywords: Alcoholism | Brain imaging | Comorbidity | Disease patterns | HIV infection | Machine learning
Advanced atherosclerosis imaging by CT: Radiomics, machine learning and deep learning
Advanced atherosclerosis imaging by CT: radiomics, machine learning and deep learning - 2019
In the last decade, technical advances in the field of medical imaging significantly improved and broadened the application of coronary CT angiography (CCTA) for the non-invasive assessment of coronary artery disease. Recently, similar breakthroughs are happening in the post-processing, analysis and interpretation of radiological images. Technologies such as radiomics make it possible to extract significantly more information from scans than human visual assessment is capable of. This allows the precision phenotyping of diseases based on medical images. The increased amount of information can then be analyzed using novel data analytic techniques such as machine learning (ML) and deep learning (DL), which utilize the power of big data to build predictive models that seek to mimic human intelligence artificially. Thanks to big data availability and increased computational power, these novel analytic methods are outperforming conventional statistical techniques. In this current overview we describe the basics of radiomics, ML and DL, highlighting similarities, differences, limitations and potential pitfalls of these techniques. In addition, we provide a brief overview of recently published results on the applications of the aforementioned techniques for the non-invasive assessment of coronary atherosclerosis using CCTA.
Keywords: Atherosclerosis | Coronary CT angiography | Radiomics | Machine learning | Deep learning
A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis
A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis - 2019
Background Deep learning offers considerable promise for medical diagnostics. We aimed to evaluate the diagnostic accuracy of deep learning algorithms versus health-care professionals in classifying diseases using medical imaging. Methods In this systematic review and meta-analysis, we searched Ovid-MEDLINE, Embase, Science Citation Index, and Conference Proceedings Citation Index for studies published from Jan 1, 2012, to June 6, 2019. Studies comparing the diagnostic performance of deep learning models and health-care professionals based on medical imaging, for any disease, were included. We excluded studies that used medical waveform data graphics material or investigated the accuracy of image segmentation rather than disease classification. We extracted binary diagnostic accuracy data and constructed contingency tables to derive the outcomes of interest: sensitivity and specificity. Studies undertaking an out-of-sample external validation were included in a meta-analysis, using a unified hierarchical model. This study is registered with PROSPERO, CRD42018091176. Findings Our search identified 31 587 studies, of which 82 (describing 147 patient cohorts) were included. 69 studies provided enough data to construct contingency tables, enabling calculation of test accuracy, with sensitivity ranging from 9·7% to 100·0% (mean 79·1%, SD 0·2) and specificity ranging from 38·9% to 100·0% (mean 88·3%, SD 0·1). An out-of-sample external validation was done in 25 studies, of which 14 made the comparison between deep learning models and health-care professionals in the same sample. 
Comparison of the performance between deep learning models and health-care professionals in these 14 studies, when restricting the analysis to the contingency table for each study reporting the highest accuracy, found a pooled sensitivity of 87·0% (95% CI 83·0–90·2) for deep learning models and 86·4% (79·9–91·0) for health-care professionals, and a pooled specificity of 92·5% (95% CI 85·1–96·4) for deep learning models and 90·5% (80·6–95·7) for health-care professionals. Interpretation Our review found the diagnostic performance of deep learning models to be equivalent to that of health-care professionals. However, a major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample. Additionally, poor reporting is prevalent in deep learning studies, which limits reliable interpretation of the reported diagnostic accuracy. New reporting standards that address specific challenges of deep learning could improve future studies, enabling greater confidence in the results of future evaluations of this promising technology.
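The per-study outcomes pooled in this meta-analysis derive from 2x2 contingency tables. As a reminder of the definitions, sensitivity and specificity are computed from such a table as follows (the counts below are illustrative, not taken from any included study):

```python
# Sensitivity and specificity from a 2x2 diagnostic contingency table:
#   tp/fn = diseased cases correctly / incorrectly classified
#   tn/fp = healthy cases correctly / incorrectly classified
def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)  # true-positive rate
    specificity = tn / (tn + fp)  # true-negative rate
    return sensitivity, specificity

se, sp = sens_spec(tp=87, fp=8, fn=13, tn=92)
print(se, sp)  # → 0.87 0.92
```

The review's pooled estimates combine such per-study pairs across studies with a unified hierarchical model rather than by simple averaging, which accounts for between-study variation.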
DeepClas4Bio: Connecting bioimaging tools with deep learning frameworks for image classification
DeepClas4Bio: connecting bioimaging tools with deep learning frameworks for image classification - 2019
Background and objective: Deep learning techniques have been successfully applied to tackle several image classification problems in bioimaging. However, the models created from deep learning frameworks cannot be easily accessed from bioimaging tools such as ImageJ or Icy; this means that life scientists are not able to take advantage of the results obtained with those models from their usual tools. In this paper, we aim to facilitate the interoperability of bioimaging tools with deep learning frameworks. Methods: In this project, called DeepClas4Bio, we have developed an extensible API that provides a common access point for classification models of several deep learning frameworks. In addition, this API might be employed to compare deep learning models, and to extend the functionality of bioimaging programs by creating plugins. Results: Using the DeepClas4Bio API, we have developed a metagenerator to easily create ImageJ plugins. In addition, we have implemented a Java application that allows users to compare several deep learning models in a simple way using the DeepClas4Bio API. Moreover, we present three examples where we show how to work with different models and frameworks included in the DeepClas4Bio API using several bioimaging tools — namely, ImageJ, Icy and ImagePy. Conclusions: This project brings to the table benefits from several perspectives. Developers of deep learning models can disseminate those models using well-known tools widely employed by life-scientists. Developers of bioimaging programs can easily create plugins that use models from deep learning frameworks. Finally, users of bioimaging tools have access to powerful tools in a known environment for them.
Keywords: Deep learning | Bioimaging | Image classification | Interoperability