No. | Title | Type |
---|---|---|
1 |
Problems of Poison: New Paradigms and "Agreed" Competition in the Era of AI-Enabled Cyber Operations
Publication year: 2020. Few developments seem as poised to alter the characteristics of security in
the digital age as the advent of artificial intelligence (AI) technologies. For national
defense establishments, the emergence of AI techniques is particularly worrisome,
not least because prototype applications already exist. Cyber attacks augmented by
AI portend the tailored manipulation of human vectors within the attack surface of
important societal systems at great scale, as well as opportunities for calamity resulting
from the secondment of technical skill from the hacker to the algorithm. Arguably
most important, however, is the fact that AI-enabled cyber campaigns contain great
potential for operational obfuscation and strategic misdirection. At the operational
level, techniques for piggybacking onto routine activities and for adaptive evasion of
security protocols add uncertainty, complicating the defensive mission particularly
where adversarial learning tools are employed in offense. Strategically, AI-enabled
cyber operations offer distinct advantages. On the one hand, attempts to persistently
shape the spectrum of cyber contention may be able to pursue conflict outcomes beyond
the expected scope of adversary operation. On the other, AI-augmented cyber defenses incorporated into
national defense postures are likely to be vulnerable to “poisoning” attacks that
predict, manipulate and subvert the functionality of defensive algorithms. This article
takes on two primary tasks. First, it considers and categorizes the primary ways in
which AI technologies are likely to augment offensive cyber operations, including the
shape of cyber activities designed to target AI systems. Then, it frames a discussion
of implications for deterrence in cyberspace by referring to the policy of persistent engagement, agreed competition and forward defense promulgated in 2018 by the United States. Here, it is argued that the centrality of cyberspace to the deployment
and operation of soon-to-be-ubiquitous AI systems implies new motivations for
operation within the domain, complicating numerous assumptions that underlie
current approaches. In particular, AI cyber operations pose unique measurement
issues for the policy regime.

Keywords: deterrence | persistent engagement | cyber | AI | machine learning |
English-language article |
2 |
When Autonomous Systems Meet Accuracy and Transferability through AI: A Survey
Publication year: 2020. With widespread applications of artificial intelligence (AI), the capabilities of perception, understanding,
decision-making, and control for autonomous systems have improved significantly in recent years. When
autonomous systems consider the performance of accuracy and transferability, several AI methods, such
as adversarial learning, reinforcement learning (RL), and meta-learning, show their powerful performance.
Here, we review the learning-based approaches in autonomous systems from the perspectives of accuracy
and transferability. Accuracy means that a well-trained model shows good results during the testing phase, in
which the testing set shares the same task or data distribution with the training set. Transferability means that
when a well-trained model is transferred to other testing domains, the accuracy is still good. Firstly, we introduce
some basic concepts of transfer learning and then present some preliminaries of adversarial learning,
RL, and meta-learning. Secondly, we focus on reviewing the accuracy or transferability or both of these approaches
to show the advantages of adversarial learning, such as generative adversarial networks, in typical
computer vision tasks in autonomous systems, including image style transfer, image super-resolution, image
deblurring/dehazing/rain removal, semantic segmentation, depth estimation, pedestrian detection, and person
re-identification. We furthermore review the performance of RL and meta-learning from the aspects of
accuracy or transferability or both of them in autonomous systems, involving pedestrian tracking, robot
navigation, and robotic manipulation. Finally, we discuss several challenges and future topics for the use
of adversarial learning, RL, and meta-learning in autonomous systems. |
English-language article |
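The survey's distinction between accuracy (testing on the same task and data distribution as training) and transferability (evaluating the same trained model on a different domain) can be illustrated with a minimal sketch. The toy regression task, the shift parameters, and all variable names below are hypothetical illustrations, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy regression task, for illustration only.
n, d = 500, 3
w_src = np.array([1.5, -2.0, 0.5])          # source-domain labeling weights
w_tgt = w_src + np.array([0.5, 0.5, -0.5])  # shifted target-domain weights

# Source domain: the distribution the model is trained on.
X_src = rng.normal(0.0, 1.0, size=(n, d))
y_src = X_src @ w_src + rng.normal(0.0, 0.1, size=n)

# Target domain: shifted inputs and a shifted labeling function.
X_tgt = rng.normal(2.0, 1.5, size=(n, d))
y_tgt = X_tgt @ w_tgt + rng.normal(0.0, 0.1, size=n)

# Train (ordinary least squares) on the source domain only.
w_hat, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

def mse(X, y, w):
    """Mean squared error of the linear model with weights w."""
    return float(np.mean((X @ w - y) ** 2))

in_domain_err = mse(X_src, y_src, w_hat)  # "accuracy": same distribution as training
transfer_err = mse(X_tgt, y_tgt, w_hat)   # "transferability": different test domain
```

Here the in-domain error stays near the noise floor while the transfer error grows with the domain shift; the approaches the survey reviews (transfer learning, adversarial learning, RL, meta-learning) aim to close exactly this gap.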
3 |
A Multi-Stage Deep Adversarial Learning Method for Video-Based Person Re-Identification
Publication year: 2020 - English PDF pages: 13 - Persian DOC pages: 42. Video-based person re-identification (re-ID) can be viewed as the process of matching images of a person across different camera views captured as unaligned video frames. Existing methods use supervisory signals to optimize the view space so that the distance between videos is maximized/minimized. However, this makes labeling persons across video views very expensive, so these methods do not scale well to large networked camera systems. It has also been noted that learning view-invariant video representations cannot be done directly, since the image features each follow their own distinct distributions. Matching videos for person re-ID therefore requires flexible models that capture the dynamics in video observations and learn view-invariant representations from access to only limited labeled training samples. In this paper, we propose a multi-stage deep learning method for video-based person re-identification that learns comparable, discriminative representations of a person. The proposed method is built on variational recurrent neural networks (VRNNs) and trained to produce latent variables with temporal dependencies that are highly discriminative yet view-invariant when matching person images. We conduct extensive experiments on three benchmark datasets and empirically demonstrate the capability of the proposed method to produce view-invariant temporal features and the high performance it achieves.
Keywords: video-based person re-identification | variational recurrent neural networks | adversarial learning |
Translated article |
4 |
APL: Adversarial Pairwise Learning for Recommender Systems
Publication year: 2019. The main objective of recommender systems is to help users select their desired items, where a major challenge is modeling users' preferences based on their historical feedback (e.g., clicks, purchases or check-ins). Recently, several recommendation models have utilized the adversarial technique, which has been successfully used to capture real data distributions in various domains (e.g., computer vision). Nevertheless, the training process of the original adversarial technique is very slow and unstable in the domain of recommender systems. First, the sparsity of the implicit feedback dataset aggravates the inherently intractable adversarial training process. Second, since the original adversarial model is designed for differentiable values (e.g., images), the discrete items also increase the training difficulty. To cope with these issues, we propose a novel method named Adversarial Pairwise Learning (APL), which unifies generative and discriminative models via adversarial learning. Specifically, based on the weaker assumption that the user prefers observed items over generated items, APL exploits pairwise ranking to accelerate the convergence and enhance the stability of adversarial learning. Additionally, a differentiable procedure is adopted to replace the discrete item sampling to optimize APL via backpropagation and stabilize the training process. Extensive experiments under multiple recommendation scenarios demonstrate APL's effectiveness, fast convergence and stability. Our implementation of APL is available at: https://github.com/ZhongchuanSun/APL.

Keywords: Adversarial learning | Pairwise ranking | Matrix factorization | Recommender systems |
English-language article |
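The pairwise mechanism the abstract describes (the discriminator assumes the user prefers an observed item over a generator-proposed one, and a differentiable soft selection replaces discrete item sampling) can be sketched as a single loss computation. The matrix-factorization scorer, the function name, and all shapes below are illustrative assumptions, not APL's actual implementation (see the linked repository for that):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x):
    z = x - x.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical setup: one user vector and an item matrix act as a
# matrix-factorization scorer for the discriminator.
rng = np.random.default_rng(1)
num_items, dim = 6, 4
user_vec = rng.normal(size=dim)
item_mat = rng.normal(size=(num_items, dim))

def apl_pairwise_loss(user_vec, item_mat, observed_item, gen_logits):
    """BPR-style pairwise objective under the assumption that the user
    prefers an observed item over an item proposed by the generator.

    Instead of sampling a discrete generated item, the generator's softmax
    distribution gives a differentiable soft selection (a "virtual item"),
    so the objective can be optimized via backpropagation.
    """
    scores = item_mat @ user_vec     # discriminator score per item
    gen_probs = softmax(gen_logits)  # generator's distribution over items
    gen_score = gen_probs @ scores   # soft (differentiable) generated-item score
    # Discriminator wants the observed item ranked above the generated one.
    return -np.log(sigmoid(scores[observed_item] - gen_score))

gen_logits = rng.normal(size=num_items)
loss = apl_pairwise_loss(user_vec, item_mat, observed_item=2, gen_logits=gen_logits)
```

In a full adversarial loop the discriminator would minimize this loss while the generator updates `gen_logits` to propose harder negatives; the soft selection is what keeps that generator update differentiable despite the items being discrete.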