Download and view articles related to big data security :: Page 1
Download immediately after payment


Search results - big data security

Number of articles found: 8
Row | Title | Type
1 Extending the limits for big data RSA cracking: Towards cache-oblivious TU decomposition
Extending the limits for big data RSA cracking: Towards cache-oblivious TU decomposition - 2020
Nowadays, Big Data security processes require mining large amounts of content that was not traditionally used for security analysis. The RSA algorithm has become the de facto standard for encryption, especially for data sent over the internet. RSA takes its security from the hardness of the Integer Factorisation Problem. As the size of the modulus of an RSA key grows with the number of bytes to be encrypted, the corresponding linear system to be solved in the adversary's integer factorisation algorithm also grows. In the age of big data this makes it compelling to redesign linear solvers over finite fields so that they exploit the memory hierarchy. To this end, we examine several matrix layouts based on space-filling curves that allow for a cache-oblivious adaptation of parallel TU decomposition for rectangular matrices over finite fields. The TU algorithm of Dumas and Roche (2002) requires index conversion routines for which the cost to encode and decode the chosen curve is significant. Using a detailed analysis of the number of bit operations required for the encoding and decoding procedures, and filtering the cost of lookup tables that represent the recursive decomposition of the Hilbert curve, we show that the Morton-hybrid order incurs the least cost for the index conversion routines required throughout the matrix decomposition, as compared to the Hilbert, Peano, or Morton orders. The motivation is that cache-efficient parallel adaptations, for which the natural sequential evaluation order demonstrates a lower cache miss rate, result in overall faster performance on parallel machines with private or shared caches and on GPUs.
Keywords: Exact linear algebra | Cache-oblivious algorithms | Space-filling curves | Morton-hybrid order
English article
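The abstract above weighs the bit-operation cost of encoding and decoding space-filling-curve indices. As a rough illustration of what such an index conversion routine involves, here is a minimal Python sketch of plain Morton (Z-order) encoding and decoding by bit interleaving; it is an assumption-laden toy, not the Morton-hybrid order or the routines analysed in the paper.

```python
def morton_encode(row: int, col: int, bits: int = 16) -> int:
    """Interleave the bits of (row, col) into a single Z-order index."""
    z = 0
    for i in range(bits):
        z |= ((col >> i) & 1) << (2 * i)        # column bits at even positions
        z |= ((row >> i) & 1) << (2 * i + 1)    # row bits at odd positions
    return z


def morton_decode(z: int, bits: int = 16) -> tuple:
    """Recover (row, col) from a Z-order index."""
    row = col = 0
    for i in range(bits):
        col |= ((z >> (2 * i)) & 1) << i
        row |= ((z >> (2 * i + 1)) & 1) << i
    return row, col


if __name__ == "__main__":
    r, c = 5, 9
    z = morton_encode(r, c)
    assert morton_decode(z) == (r, c)
    print(f"({r}, {c}) -> Morton index {z}")
```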
2 Towards DNA-based data security in the cloud computing environment
Year of publication: 2020 - Pages in the English PDF file: 9 - Pages in the Persian doc file: 36
Nowadays, data size is increasing daily from gigabytes to terabytes or even petabytes, mainly because of the growth of large amounts of real-time data. Most big data is transmitted over the internet and stored in the cloud computing environment. Since cloud computing provides internet-based services, there are also attackers and malicious users. They constantly try to access users' confidential big data without having the access right. Sometimes they replace the original data with fake data. Therefore, big data security has recently become a major concern. Deoxyribonucleic acid (DNA) computing is an emerging field for improving data security that is based on the biological concept of DNA. A DNA-based data encryption scheme is proposed in this paper for the cloud computing environment. Here, a 1024-bit secret key is generated based on DNA computing, the user's attributes and Media Access Control (MAC) address, American Standard Code for Information Interchange (ASCII) values, DNA bases and the complementary rule; this enables the system to protect against many security attacks. Experimental results as well as theoretical analyses show the efficiency and effectiveness of the proposed scheme compared with some well-known existing schemes.
Keywords: Cloud computing | DNA computing | Big data security | MAC address | Complementary rule | CloudSim
Translated article
3 Towards DNA based data security in the cloud computing environment
Towards DNA-based data security in the cloud computing environment - 2020
Nowadays, data size is increasing day by day from gigabytes to terabytes or even petabytes, mainly because of the evolution of a large amount of real-time data. Most big data is transmitted through the internet and stored in the cloud computing environment. As cloud computing provides internet-based services, there are many attackers and malicious users. They always try to access users' confidential big data without having the access right. Sometimes, they replace the original data with fake data. Therefore, big data security has become a significant concern recently. Deoxyribonucleic Acid (DNA) computing is an advanced, emerging field for improving data security, which is based on the biological concept of DNA. A novel DNA-based data encryption scheme is proposed in this paper for the cloud computing environment. Here, a 1024-bit secret key is generated based on DNA computing, the user's attributes and the Media Access Control (MAC) address of the user; the decimal encoding rule, American Standard Code for Information Interchange (ASCII) values, DNA bases and the complementary rule are used to generate the secret key, which enables the system to protect against many security attacks. Experimental results, as well as theoretical analyses, show the efficiency and effectiveness of the proposed scheme over some well-known existing schemes.
Keywords: Cloud computing | DNA computing | Big data security | MAC address | Complementary rule | CloudSim
English article
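As a loose illustration of the ingredients the abstract above names (key material derived from the user's attributes and MAC address, binary-to-DNA-base encoding, and a complementary rule), the following Python sketch shows one way such pieces could be combined. The encoding tables, the use of SHA-512 for key stretching, and the sample inputs are assumptions for this sketch and do not reproduce the paper's 1024-bit key generation scheme.

```python
import hashlib

BIN_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}   # assumed encoding table
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}        # Watson-Crick complementary rule


def derive_key_bits(user_attrs: str, mac_addr: str, nbits: int = 1024) -> str:
    """Stretch user attributes plus a MAC address into nbits of key material (assumed SHA-512 construction)."""
    material, counter = b"", 0
    while len(material) * 8 < nbits:
        material += hashlib.sha512(f"{user_attrs}|{mac_addr}|{counter}".encode()).digest()
        counter += 1
    return "".join(f"{byte:08b}" for byte in material)[:nbits]


def bits_to_dna(bits: str) -> str:
    """Encode a binary string (even length) as a DNA base sequence."""
    return "".join(BIN_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))


def apply_complement(seq: str) -> str:
    """Apply the complementary rule to every base of the sequence."""
    return "".join(COMPLEMENT[b] for b in seq)


if __name__ == "__main__":
    key_bits = derive_key_bits("alice;dept=finance", "00:1A:2B:3C:4D:5E")   # hypothetical inputs
    dna_key = apply_complement(bits_to_dna(key_bits))
    print(len(key_bits), "key bits ->", dna_key[:32], "...")
```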
4 TPTVer: A Trusted Third Party Based Trusted Verifier for Multi-Layered Outsourced Big Data System in Cloud Environment
TPTVer: A trusted-third-party-based trusted verifier for multi-layered outsourced big data systems in the cloud environment - 2018
Cloud computing is very useful for big data owners who do not want to manage IT infrastructure and big data technique details. However, it is hard for a big data owner to trust a multi-layer outsourced big data system in the cloud environment and to verify which outsourced service leads to a problem. Similarly, the cloud service provider cannot simply trust the data computation applications. Finally, the verification data itself may also leak sensitive information of the cloud service provider and the data owner. We propose a new three-level definition of verification, a threat model, and corresponding trusted policies based on different roles for an outsourced big data system in the cloud. We also provide two policy enforcement methods for building a trusted data computation environment by measuring both the MapReduce application and its behaviors, based on trusted computing and aspect-oriented programming. To prevent sensitive information leakage from the verification process, we provide a privacy-preserving verification method. Finally, we implement TPTVer, a Trusted third Party based Trusted Verifier, as a proof-of-concept system. Our evaluation and analysis show that TPTVer can provide trusted verification for multi-layered outsourced big data systems in the cloud with low overhead.
Keywords: big data security; outsourced service security; MapReduce behavior; trusted verification; trusted third party
English article
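The abstract above describes building trust by measuring the MapReduce application before it runs. The following Python sketch shows only the bare measurement-and-verification idea (hash the application binary and compare it against a reference held by a trusted third party); the file name, digest algorithm and registration flow are assumptions for illustration and are not TPTVer's actual design.

```python
import hashlib
from pathlib import Path

# Reference measurements held by the (hypothetical) trusted third party.
TRUSTED_MEASUREMENTS: dict = {}


def measure(path: Path) -> str:
    """Return the SHA-256 digest of an application binary."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def register(path: Path) -> None:
    """The trusted third party records a reference measurement before outsourcing."""
    TRUSTED_MEASUREMENTS[path.name] = measure(path)


def verify(path: Path) -> bool:
    """Re-measure the application and compare it with the trusted reference, if any."""
    expected = TRUSTED_MEASUREMENTS.get(path.name)
    return expected is not None and measure(path) == expected


if __name__ == "__main__":
    app = Path("wordcount.jar")   # hypothetical MapReduce application
    if app.exists():
        register(app)
        print("trusted" if verify(app) else "NOT trusted")
```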
5 A Bi-objective Hyper-Heuristic Support Vector Machines for Big Data Cyber-Security
A bi-objective hyper-heuristic support vector machine for big data cyber-security - 2018
Cyber security in the context of big data is known to be a critical problem and presents a great challenge to the research community. Machine learning algorithms have been suggested as candidates for handling big data security problems. Among these algorithms, support vector machines (SVMs) have achieved remarkable success on various classification problems. However, to establish an effective SVM, the user needs to define the proper SVM configuration in advance, which is a challenging task that requires expert knowledge and a large amount of manual effort for trial and error. In this paper, we formulate the SVM configuration process as a bi-objective optimization problem in which accuracy and model complexity are considered as two conflicting objectives. We propose a novel hyper-heuristic framework for bi-objective optimization that is independent of the problem domain. This is the first time that a hyper-heuristic has been developed for this problem. The proposed hyper-heuristic framework consists of a high-level strategy and low-level heuristics. The high-level strategy uses the search performance to control the selection of which low-level heuristic should be used to generate a new SVM configuration. The low-level heuristics each use different rules to effectively explore the SVM configuration search space. To address bi-objective optimization, the proposed framework adaptively integrates the strengths of decomposition- and Pareto-based approaches to approximate the Pareto set of SVM configurations. The effectiveness of the proposed framework has been evaluated on two cyber security problems: Microsoft malware big data classification and anomaly intrusion detection. The obtained results demonstrate that the proposed framework is very effective, if not superior, compared with its counterparts and other algorithms.
INDEX TERMS: Hyper-heuristics, big data, cyber security, optimisation
English article
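The abstract above frames SVM configuration as a bi-objective problem with accuracy and model complexity as conflicting objectives. The following Python sketch, assuming scikit-learn and using the number of support vectors as a stand-in complexity measure, evaluates a small grid of configurations and keeps the non-dominated (Pareto) set; it is only a baseline illustration of the objective space, not the paper's hyper-heuristic framework.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)


def evaluate(C: float, gamma: float) -> tuple:
    """Return (test error, number of support vectors) for one SVM configuration."""
    clf = SVC(C=C, gamma=gamma).fit(X_tr, y_tr)
    return 1.0 - clf.score(X_te, y_te), len(clf.support_)


# A small grid of candidate configurations (assumed values for illustration).
configs = [(C, g) for C in (0.1, 1, 10, 100) for g in (1e-4, 1e-3, 1e-2)]
scored = [(cfg, evaluate(*cfg)) for cfg in configs]

# Keep the non-dominated (Pareto) configurations over the two objectives.
pareto = [
    (cfg, obj) for cfg, obj in scored
    if not any(o[0] <= obj[0] and o[1] <= obj[1] and o != obj for _, o in scored)
]
for (C, g), (err, nsv) in pareto:
    print(f"C={C}, gamma={g}: error={err:.3f}, support vectors={nsv}")
```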
6 An approach for Big Data Security based on Hadoop Distributed File system
An approach for big data security based on the Hadoop Distributed File System - 2018
Cloud computing emerged for huge data because of its ability to provide users with on-demand, reliable, flexible, and low-cost services. With the increasing use of cloud applications, data security protection has become an important issue for the cloud. In this work, the proposed approach is used to improve the performance of file encryption/decryption by using AES and OTP algorithms integrated with Hadoop: files are encrypted within HDFS and decrypted within the Map task. In previous works, encryption/decryption used the AES algorithm and the size of the encrypted file increased by 50% over the original file size. The proposed approach improves this ratio, with the size of the encrypted file increasing by 20% over the original file size. We also compare this approach with the previously implemented method, implement the new approach to secure HDFS, and conduct experimental studies to verify its effectiveness.
Keywords: Cloud storage, Hadoop, HDFS, Data Security, Encryption, Decryption
English article
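The abstract above combines AES and OTP inside Hadoop so that files are encrypted in HDFS and decrypted in the Map task. As a minimal, hedged sketch of the AES half of that idea, the snippet below encrypts a local file with AES-CTR before it would be uploaded to HDFS; the `cryptography` package, the CTR mode choice and the file names are assumptions for illustration, and the paper's OTP integration and Hadoop-side decryption are not shown.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes


def aes_ctr_encrypt_file(src: str, dst: str, key: bytes) -> bytes:
    """Encrypt src into dst with AES-CTR and return the nonce needed to decrypt."""
    nonce = os.urandom(16)
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while chunk := fin.read(64 * 1024):
            fout.write(encryptor.update(chunk))
        fout.write(encryptor.finalize())
    return nonce


if __name__ == "__main__":
    key = os.urandom(32)   # 256-bit AES key
    if os.path.exists("input.txt"):
        nonce = aes_ctr_encrypt_file("input.txt", "input.txt.enc", key)
        # The ciphertext can then be uploaded, e.g. `hdfs dfs -put input.txt.enc /data/`,
        # and decrypted later with the same key and nonce.
```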
7 Big Data Security Issues and Challenges
Big data security issues and challenges - 2016
We have already entered the data deluge. The data deluge means data generated by IoT devices and humans simultaneously. The data deluge is a big threat for technologists but beneficial for end users. The coming problem is the security of this data. Big Data is too big, too fast and too diverse to be handled by traditional database systems. Traditional database systems are very good at analyzing structured data, but they are not enough to analyze unstructured data. In this paper we discuss the possible challenges and security issues related to Big Data characteristics and possible solutions.
Keywords: Anonymization | Big Data | Unstructured Data | IoT | Traditional DBMS | Machine Learning | Data Mining
English article
8 Big data security
Big data security - 2012
The term big data has come into use recently to refer to the ever-increasing amount of information that organisations are storing, processing and analysing, owing to the growing number of information sources in use. According to research conducted by IDC, there were 1.8 zettabytes (1.8 trillion gigabytes) of information created and replicated in 2011 alone and that amount is doubling every two years. Within the next decade, the amount of information managed by enterprise datacentres will grow by 50 times, whereas the number of IT professionals will expand by just 1.5 times.
English article