Dependence structure of Gabor wavelets based on copula for face recognition
Dependence structure of Gabor wavelets based on copula for face recognition - 2019
Low resolution, difficult illumination, and noise are important factors that degrade the performance of face recognition systems. To counteract these adverse factors, in this paper we propose copula probability models based on Gabor wavelets for face recognition. Gabor wavelets perform robustly under adverse lighting and noise conditions. Strong dependencies exist in the Gabor wavelet domain because the wavelets are non-orthogonal. In light of the structural characteristics of Gabor wavelet sub-bands, the proposed methods use copulas to capture these dependencies and represent the face image. Three probability-model-based methods are proposed for face recognition: CF-GW (Copula Function of Gabor Wavelets), LCM-GW (Lightweight Copula Model of Gabor Wavelets), and LCM-GW-PSO (Lightweight Copula Model of Gabor Wavelets with Particle Swarm Optimization). Face recognition experiments show that the proposed methods are more robust under low-resolution, lighting, and noise conditions than popular methods such as LBP-based and other Gabor-based methods. The face features extracted by our methods lie on a Riemannian manifold, which differs from Euclidean space. To address face recognition in complex environments, these Riemannian-manifold features can be combined with face features in Euclidean space to obtain a more robust face recognition system using expert-system technologies such as reasoning models and multi-classifier fusion.
Keywords: Face recognition | Gabor wavelets | Gaussian copula | Covariance matrix | Particle swarm optimization
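As an illustration of the core idea, the following is a minimal sketch (not the paper's exact CF-GW/LCM-GW pipeline) of capturing inter-subband dependence of Gabor responses with a Gaussian copula: each filter response map is rank-transformed to uniform scores, pushed through the normal quantile function, and the correlation matrix of the resulting normal scores is the Gaussian-copula parameter. The kernel parameters and filter bank sizes here are illustrative assumptions.

```python
# Sketch: Gaussian-copula dependence structure of a Gabor filter bank.
import numpy as np
from scipy.signal import fftconvolve
from scipy.stats import norm, rankdata

def gabor_kernel(ksize, theta, lam, sigma=4.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta and wavelength lam."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gaussian_copula_correlation(image, thetas, lams, ksize=15):
    """Filter the image with a Gabor bank, map each response map to
    standard-normal scores via its empirical CDF (ranks), and return the
    correlation matrix of the scores: the copula-based dependence descriptor."""
    responses = []
    for th in thetas:
        for lam in lams:
            r = fftconvolve(image, gabor_kernel(ksize, th, lam), mode="same")
            u = rankdata(r.ravel()) / (r.size + 1)   # empirical CDF values in (0, 1)
            responses.append(norm.ppf(u))            # normal scores
    return np.corrcoef(np.vstack(responses))

rng = np.random.default_rng(0)
img = rng.standard_normal((32, 32))                  # stand-in for a face image
R = gaussian_copula_correlation(img, thetas=[0, np.pi / 4], lams=[4, 8])
print(R.shape)                                       # 2 orientations x 2 wavelengths -> (4, 4)
```

The off-diagonal entries of `R` quantify exactly the inter-subband dependence that the non-orthogonality of Gabor wavelets creates.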
Robust VaR and CVaR optimization under joint ambiguity in distributions, means, and covariances
Robust VaR and CVaR optimization under joint ambiguity in distributions, means, and covariances - 2018
We develop robust models for optimization of the VaR (value at risk) and CVaR (conditional value at risk) risk measures with a minimum expected return constraint under joint ambiguity in distribution, mean returns, and covariance matrix. We formulate models for ellipsoidal, polytopic, and interval ambiguity sets of the means and covariances. The models unify and/or extend several existing models. We also show how to overcome the well-known conservativeness of robust optimization models by proposing an algorithm and a heuristic for constructing joint ellipsoidal ambiguity sets from point estimates given by multiple securities analysts. Using a controlled experiment we show how the well-known sensitivity of CVaR to mis-specifications of the first four moments of the distribution is alleviated with the robust models. Finally, applying the model to the active management of portfolios of sovereign credit default swaps (CDS) from Eurozone core and periphery, and Central, Eastern and South-Eastern Europe countries, we illustrate that investment strategies using robust optimization models perform well out-of-sample, even during the eurozone crisis. We consider both buy-and-hold and active management strategies.
Keywords: Risk management | Data ambiguity | Coherent risk measures | Portfolio optimization | Eurozone crisis
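For context, the robust models build on the scenario-based CVaR linear program of Rockafellar and Uryasev. The sketch below solves only the nominal (non-robust) version of that LP with `scipy.optimize.linprog`; the scenario matrix, confidence level, and return target are illustrative assumptions, not the paper's data or its ambiguity sets.

```python
# Sketch: nominal scenario-based CVaR minimisation (Rockafellar-Uryasev LP).
import numpy as np
from scipy.optimize import linprog

def min_cvar_portfolio(returns, alpha=0.95, target=0.0):
    """returns: (S, n) matrix of scenario returns. Minimise CVaR_alpha of the
    loss -r.w subject to full investment, no shorting, and mean return >= target."""
    S, n = returns.shape
    # decision vector: [w (n weights), zeta (VaR level), u (S excess losses)]
    c = np.concatenate([np.zeros(n), [1.0], np.full(S, 1.0 / ((1 - alpha) * S))])
    # u_s >= -r_s.w - zeta   <=>   -r_s.w - zeta - u_s <= 0
    A_ub = np.hstack([-returns, -np.ones((S, 1)), -np.eye(S)])
    b_ub = np.zeros(S)
    # expected-return constraint: -mean(r).w <= -target
    A_ub = np.vstack([A_ub, np.concatenate([-returns.mean(0), [0.0], np.zeros(S)])])
    b_ub = np.append(b_ub, -target)
    A_eq = np.concatenate([np.ones(n), [0.0], np.zeros(S)]).reshape(1, -1)
    bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * S
    res = linprog(c, A_ub, b_ub, A_eq, [1.0], bounds=bounds, method="highs")
    return res.x[:n], res.fun                        # optimal weights and CVaR

rng = np.random.default_rng(1)
r = rng.normal(0.001, 0.02, size=(500, 4))           # 500 mock scenarios, 4 assets
w, cvar = min_cvar_portfolio(r, alpha=0.95)
print(np.round(w, 3), np.isfinite(cvar))
```

The robust variants in the paper replace the fixed scenario moments with worst-case values over the ellipsoidal, polytopic, or interval ambiguity sets.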
Robust normal estimation and region growing segmentation of infrastructure 3D point cloud models
Robust normal estimation and region growing segmentation of infrastructure 3D point cloud models - 2017
Modern remote sensing technologies such as three-dimensional (3D) laser scanners and image-based 3D scene reconstruction are in increasing demand for applications in civil infrastructure design, maintenance, operation, and as-built construction verification. The complex nature of the 3D point clouds these technologies generate, as well as the often massive scale of the 3D data, make it inefficient and time consuming to manually analyze and manipulate point clouds, and highlight the need for automated analysis techniques. This paper presents one such technique, a new region growing algorithm for the automated segmentation of both planar and non-planar surfaces in point clouds. A core component of the algorithm is a new point normal estimation method, an essential task for many point cloud processing algorithms. The newly developed estimation method utilizes robust multivariate statistical outlier analysis for reliable normal estimation in complex 3D models, considering that these models often contain regions of varying surface roughness, a mixture of high curvature and low curvature regions, and sharp features. An adaptation of Mahalanobis distance, in which the mean vector and covariance matrix are derived from a high-breakdown multivariate location and scale estimator called the Deterministic MM-estimator (DetMM), is used to find and discard outlier points prior to estimating the best local tangent plane around any point in a cloud. This approach more accurately estimates point normals located in highly curved regions or near sharp features. Thereafter, the estimated point normals feed a region growing segmentation algorithm that requires only a single input parameter, an improvement over existing methods which typically require two control parameters.
The reliability and robustness of the normal estimation subroutine were compared against well-known normal estimation methods including the Minimum Volume Ellipsoid (MVE) and Minimum Covariance Determinant (MCD) estimators, along with Maximum Likelihood Sample Consensus (MLESAC). The overall region growing segmentation algorithm was then experimentally validated on several challenging 3D point clouds of real-world infrastructure systems. The results indicate that the developed approach performs more accurately and robustly in comparison with conventional region growing methods, particularly in the presence of sharp features, outliers, and noise.
Keywords: Segmentation | 3D point cloud models | Robust estimation | Outliers | 3D reconstruction | Computer vision | Normal estimation | 3D data processing
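The outlier-rejection step can be illustrated as follows. Note that this sketch substitutes the ordinary sample mean and covariance for the paper's high-breakdown DetMM estimator, so it is only a simplified analogue: points with a large Mahalanobis distance are discarded, and the normal is the smallest principal axis of the surviving neighborhood.

```python
# Sketch: Mahalanobis-trimmed tangent-plane normal estimation
# (ordinary mean/covariance stand in for the paper's DetMM estimator).
import numpy as np

def trimmed_normal(neighbors, chi2_cutoff=7.81):     # ~95% quantile of chi2(3)
    """neighbors: (k, 3) points around a query point. Discard points whose
    squared Mahalanobis distance exceeds the cutoff, then return the smallest
    principal axis of the survivors as the estimated surface normal."""
    mu = neighbors.mean(0)
    cov_inv = np.linalg.inv(np.cov(neighbors.T))
    d = neighbors - mu
    d2 = np.einsum('ij,jk,ik->i', d, cov_inv, d)     # squared Mahalanobis distances
    inliers = neighbors[d2 <= chi2_cutoff]
    centered = inliers - inliers.mean(0)
    # right-singular vector of the smallest singular value = plane normal
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]

rng = np.random.default_rng(2)
pts = np.column_stack([rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200),
                       rng.normal(0, 0.01, 200)])    # noisy z ~ 0 plane
pts[:5, 2] += 5.0                                    # gross outliers off the plane
n = trimmed_normal(pts)
print(np.round(np.abs(n), 2))                        # close to the z-axis [0, 0, 1]
```

Without the trimming step, the five outliers would tilt the fitted plane; with it, the recovered normal stays aligned with the true surface.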
Photo-z-SQL: Integrated, flexible photometric redshift computation in a database
Photo-z-SQL: Integrated, flexible photometric redshift computation in a database - 2017
We present a flexible template-based photometric redshift estimation framework, implemented in C#, that can be seamlessly integrated into a SQL database (or DB) server and executed on demand in SQL. The DB integration eliminates the need to move large photometric datasets outside the database for redshift estimation, and utilizes the computational capabilities of DB hardware. The code is able to perform both maximum likelihood and Bayesian estimation, and can handle inputs of variable photometric filter sets and corresponding broad-band magnitudes. It is possible to take into account the full covariance matrix between filters, and filter zero points can be empirically calibrated using measurements with given redshifts. The list of spectral templates and the prior can be specified flexibly, and the expensive synthetic magnitude computations are done via lazy evaluation, coupled with a caching of results. Parallel execution is fully supported. For large upcoming photometric surveys such as the LSST, the ability to perform in-place photo-z calculation would be a significant advantage. Also, the efficient handling of variable filter sets is a necessity for heterogeneous databases, for example the Hubble Source Catalog, and for cross-match services such as SkyQuery. We illustrate the performance of our code on two reference photo-z estimation test datasets, and provide an analysis of execution time and scalability with respect to different configurations. The code is available for download at https://github.com/beckrob/Photo-z-SQL.
Keywords: Galaxies: distances and redshifts | Techniques: photometric | Astronomical databases: miscellaneous
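The core of any template-fitting photo-z estimator, including the maximum-likelihood mode described above, can be sketched as a chi-square grid search over (template, redshift) pairs with the free template amplitude profiled out analytically. The mock template grid, filter count, and function names below are placeholders, not Photo-z-SQL's actual templates or API.

```python
# Sketch: maximum-likelihood template-fitting photo-z on a redshift grid.
import numpy as np

def photoz_ml(obs_flux, obs_err, template_flux_grid, z_grid):
    """template_flux_grid: (T, Z, F) synthetic fluxes per template, redshift
    step, and filter. For each (template, z), the best-fit amplitude a has the
    closed form a = sum(F m / s^2) / sum(F^2 / s^2); return the chi^2-minimising
    redshift and template index."""
    w = 1.0 / obs_err**2
    num = (template_flux_grid * obs_flux * w).sum(-1)        # (T, Z)
    den = (template_flux_grid**2 * w).sum(-1)
    a = num / den                                            # profiled amplitudes
    resid = obs_flux - a[..., None] * template_flux_grid
    chi2 = (resid**2 * w).sum(-1)                            # (T, Z) chi-square surface
    t_best, z_best = np.unravel_index(chi2.argmin(), chi2.shape)
    return z_grid[z_best], t_best

z_grid = np.linspace(0.0, 2.0, 201)
rng = np.random.default_rng(3)
templates = rng.uniform(0.5, 2.0, size=(3, z_grid.size, 5))  # mock (T, Z, F) fluxes
true_t, true_z = 1, 120
obs = 2.5 * templates[true_t, true_z] + rng.normal(0, 1e-3, 5)
z_hat, t_hat = photoz_ml(obs, np.full(5, 1e-3), templates, z_grid)
print(t_hat == true_t, z_hat == z_grid[true_z])
```

A Bayesian mode replaces the argmin with a prior-weighted marginalisation of exp(-chi2/2) over templates; the lazy evaluation and caching mentioned in the abstract target the expensive construction of `template_flux_grid`.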
Statistical analysis of big data on pharmacogenomics
Statistical analysis of big data on pharmacogenomics - 2013
This paper discusses statistical methods for estimating complex correlation structure from large pharmacogenomic datasets. We selectively review several prominent statistical methods for estimating large covariance matrices for understanding correlation structure, inverse covariance matrices for network modeling, large-scale simultaneous tests for selecting significantly differentially expressed genes and proteins and genetic markers for complex diseases, and high dimensional variable selection for identifying important molecules for understanding molecular mechanisms in pharmacogenomics. Their applications to gene network estimation and biomarker selection are used to illustrate the methodological power. Several new challenges of Big data analysis, including complex data distribution, missing data, measurement error, spurious correlation, endogeneity, and the need for robust statistical methods, are also discussed.
Keywords: Big data | High dimensional statistics | Approximate factor model | Graphical model | Multiple testing | Variable selection | Marginal screening | Robust statistics
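As one concrete example of the reviewed large-covariance estimators, the sketch below implements hard thresholding of a sample covariance matrix in the spirit of Bickel-Levina thresholding; the fixed threshold is an illustrative choice rather than the data-driven (e.g. cross-validated) one such methods normally use.

```python
# Sketch: hard-thresholded estimation of a large (p >> n) covariance matrix.
import numpy as np

def threshold_covariance(X, tau):
    """X: (n, p) data matrix with possibly p >> n. Keep the diagonal
    (variances), zero out off-diagonal sample covariances below tau in
    absolute value. Thresholding makes the estimate consistent under
    sparsity when the plain sample covariance is not."""
    S = np.cov(X, rowvar=False)
    T = np.where(np.abs(S) >= tau, S, 0.0)
    np.fill_diagonal(T, np.diag(S))          # never threshold the variances
    return T

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 200))           # n = 50 samples, p = 200 variables
T = threshold_covariance(X, tau=0.5)
print(T.shape, (T == 0).mean() > 0.5)        # high-dimensional, mostly sparse
```

With independent variables and n = 50, spurious off-diagonal sample covariances have standard error about 1/sqrt(50) ~ 0.14, so the 0.5 threshold removes nearly all of them, illustrating the "spurious correlation" challenge the paper discusses.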
On ground motion prediction parameters for probabilistic seismic hazard analysis
Publication year: 2011 - English PDF: 21 pages - Persian doc: 31 pages
It is well established that the applicable range of an empirical ground motion prediction model is limited by the range of the predictor variables covered by the data used in the analysis. However, in probabilistic seismic hazard analysis (PSHA), the limits of applicability of ground motion prediction models (GMPMs) are often ignored, and the empirical relations are extrapolated. In this paper, we show that such extrapolation greatly increases the uncertainty in a GMPM when it is used to predict intensity parameters. This increase, which is epistemic in nature, depends on the chosen functional form, the covariance matrix of the regression coefficients, the regression techniques used, and the quality of the dataset. Furthermore, using examples from the database of the Next Generation Attenuation (NGA) ground motion models project and some favorable functional forms, we examine the increase in seismic hazard arising from extrapolation of GMPMs.
|Translated article|
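The role of the regression-coefficient covariance matrix in the extrapolation problem can be illustrated with a toy linear model (not an actual GMPM functional form): the variance of the predicted median, x^T C x, grows rapidly once the predictor leaves the magnitude range covered by the data.

```python
# Sketch: epistemic uncertainty of a regression-based prediction model
# inflating under extrapolation (toy quadratic model, not a real GMPM).
import numpy as np

rng = np.random.default_rng(5)
M = rng.uniform(5.0, 7.0, 100)                    # magnitudes covered by the data
X = np.column_stack([np.ones_like(M), M, M**2])   # design matrix of the toy model
beta_true = np.array([-2.0, 1.1, -0.05])
y = X @ beta_true + rng.normal(0, 0.3, 100)       # mock ln(intensity) observations

beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = res[0] / (len(y) - 3)                    # residual variance estimate
C = sigma2 * np.linalg.inv(X.T @ X)               # covariance of the coefficients

def epistemic_sd(m):
    """Standard deviation of the predicted median at magnitude m: sqrt(x.C.x)."""
    x = np.array([1.0, m, m**2])
    return float(np.sqrt(x @ C @ x))

# Inside the data range (M = 6) vs. extrapolated (M = 8.5): the second value
# is much larger, which is the epistemic inflation the paper quantifies.
print(epistemic_sd(6.0) < epistemic_sd(8.5))
```

This is exactly why the abstract stresses that the increase depends on the functional form and on the coefficient covariance matrix: a steeper or higher-order form makes x^T C x blow up faster outside the data range.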