Download and view articles related to Reliability :: Page 1
Search results - Reliability

Number of articles found: 281
No. | Title | Type
1 Factors influencing the adoption of mHealth services in a developing country: A patient-centric study
Year of publication: 2020
mHealth under the umbrella of eHealth has become an essential tool for providing quality, accessible and equal health care services at an affordable cost. Despite the potential benefits of mHealth, its adoption remains a big challenge in developing countries such as Bangladesh. This study aims to examine the factors affecting the adoption of mHealth services in Bangladesh by using the extended Unified Theory of Acceptance and Use of Technology (UTAUT) model with perceived reliability and price value factors. It also examines the moderating effect of gender on the intention to use and on the actual usage behavior of users of mHealth services. A well-structured face-to-face survey was employed to collect the data. Structural equation modeling (SEM) with a partial least squares method was used to analyze the data collected from 296 generation Y participants. The results confirmed that performance expectancy, social influence, facilitating conditions and perceived reliability positively influence the behavioral intention to adopt mHealth services. However, effort expectancy and price value did not have a significant influence on the behavioral intention. Moreover, gender has a significant moderating effect on mHealth services adoption in certain cases. Finally, the theoretical and practical implications of this study are also discussed.
Keywords: mHealth | Developing countries | UTAUT model | Generation Y | Bangladesh
English article
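As a rough illustration of the statistical analysis described in the abstract above, the sketch below specifies an extended UTAUT structural model and fits it to survey data. The paper uses PLS-SEM; this sketch substitutes the covariance-based semopy library purely for illustration, and all indicator names and the survey file are hypothetical.

```python
# Hypothetical UTAUT-style SEM specification, in the spirit of the study above.
# The paper uses PLS-SEM; semopy (covariance-based SEM) is used here only for
# illustration. Indicator names (pe1, ..., ub2) and the CSV file are assumptions.
import pandas as pd
import semopy

# Constructs: PE/EE/SI/FC = UTAUT factors, PR = perceived reliability,
# PV = price value, BI = behavioral intention, UB = usage behavior.
MODEL_DESC = """
PE =~ pe1 + pe2 + pe3
EE =~ ee1 + ee2 + ee3
SI =~ si1 + si2 + si3
FC =~ fc1 + fc2 + fc3
PR =~ pr1 + pr2 + pr3
PV =~ pv1 + pv2 + pv3
BI =~ bi1 + bi2 + bi3
UB =~ ub1 + ub2
BI ~ PE + EE + SI + FC + PR + PV
UB ~ BI + FC
"""

def fit_utaut(csv_path: str):
    """Fit the hypothetical model and return the table of path estimates."""
    data = pd.read_csv(csv_path)        # one column per indicator
    model = semopy.Model(MODEL_DESC)
    model.fit(data)
    return model.inspect()

if __name__ == "__main__":
    print(fit_utaut("mhealth_survey.csv"))   # hypothetical survey data file
```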
2 Neural network-based multi-objective evolutionary algorithm for dynamic workflow scheduling in cloud computing
Year of publication: 2020 - English PDF pages: 16 - Persian DOC pages: 45
Workflow scheduling is a widely studied research topic in cloud computing; it provisions cloud resources for workflow tasks while respecting the specified QoS objectives. In this paper, we model the dynamic workflow scheduling problem as a dynamic multi-objective optimization problem (DMOP) in which the sources of dynamism are resource failures and the number of objectives, both of which may change over time. Software faults or hardware failures may cause the first type of dynamism, while facing real-life scenarios in cloud computing may change the number of objectives during workflow execution. In this study, we propose a prediction-based dynamic multi-objective evolutionary algorithm, called NN-DNSGA-II, which combines an artificial neural network with the NSGA-II algorithm. In addition, five non-prediction-based dynamic algorithms from the literature are adapted for the dynamic workflow scheduling problem. Scheduling solutions are found by considering six objectives: minimizing makespan, cost, energy and the degree of imbalance, and maximizing reliability and utility. Empirical studies based on real-world applications from the Pegasus workflow management system show that our NN-DNSGA-II algorithm significantly outperforms its alternatives in most cases with respect to the metrics considered for DMOPs with a true Pareto-optimal front, including the number of non-dominated solutions, Schott's spacing and the Hypervolume indicator.
Translated article
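As a small, self-contained illustration of the Pareto-front quality metrics named in the abstract above (not the paper's NN-DNSGA-II algorithm), the sketch below computes the set of non-dominated solutions and Schott's spacing for a toy set of candidate schedules; the objective values are invented and assumed to be minimized.

```python
# Minimal sketch (not the paper's NN-DNSGA-II): two of the quality metrics
# mentioned above for comparing multi-objective schedules, assuming all
# objectives have been converted to minimization.
import numpy as np

def non_dominated(objs: np.ndarray) -> np.ndarray:
    """Return the rows of `objs` (solutions x objectives) that no other row dominates."""
    keep = []
    for i, f in enumerate(objs):
        dominated = np.any(np.all(objs <= f, axis=1) & np.any(objs < f, axis=1))
        if not dominated:
            keep.append(i)
    return objs[keep]

def schott_spacing(front: np.ndarray) -> float:
    """Schott's spacing metric: spread of L1 nearest-neighbour distances on a front."""
    n = len(front)
    if n < 2:
        return 0.0
    d = np.full(n, np.inf)
    for i in range(n):
        for j in range(n):
            if i != j:
                d[i] = min(d[i], np.sum(np.abs(front[i] - front[j])))
    return float(np.sqrt(np.sum((d.mean() - d) ** 2) / (n - 1)))

# Toy example: 4 candidate schedules, 3 minimized objectives (e.g. makespan, cost, energy)
objs = np.array([[10.0, 5.0, 2.0], [8.0, 6.0, 3.0], [12.0, 4.0, 1.0], [11.0, 6.0, 3.0]])
front = non_dominated(objs)
print(len(front), schott_spacing(front))
```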
3 Truth finding by reliability estimation on inconsistent entities for heterogeneous data sets
Year of publication: 2020
An important task in big data integration is to derive accurate data records from noisy and conflicting values collected from multiple sources. Most existing truth finding methods assume that reliability is consistent across the whole data set, ignoring the fact that different attributes, objects and object groups may have different reliabilities even with respect to the same source. These reliability differences are caused by differences in the hardness of obtaining attribute values, non-uniform updates to objects and differences in group privileges. This paper addresses the problem of how to compute truths by effectively estimating the reliabilities of attributes, objects and object groups in a multi-source heterogeneous data environment. We first propose an optimization framework TFAR, its implementation and a Lagrangian duality solution for Truth Finding by Attribute Reliability estimation. We then present a Bayesian probabilistic graphical model TFOR and an inference algorithm applying Collapsed Gibbs Sampling for Truth Finding by Object Reliability estimation. Finally, we give an optimization framework TFGR and its implementation for Truth Finding by Group Reliability estimation. All these models lead to a more accurate estimation of the respective attribute, object and object group reliabilities, which in turn achieves a better accuracy in inferring the truths. Experimental results on both real data and synthetic data show that our methods have better performance than the state-of-the-art truth discovery methods.
Keywords: Truth finding | Attribute reliability | Object reliability | Group reliability | Entity hardness | Probability graphical model
English article
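The sketch below illustrates the general idea of truth finding by reliability estimation with a minimal iterative scheme that alternates between reliability-weighted voting and source-reliability updates. It is a generic textbook-style baseline, not the paper's TFAR, TFOR or TFGR models, and the claims data are invented.

```python
# Minimal sketch of generic iterative truth finding (not the paper's TFAR/TFOR/TFGR):
# alternate between (1) picking the reliability-weighted majority value per attribute
# and (2) re-estimating each source's reliability from its agreement with those picks.
from collections import defaultdict

def truth_finding(claims, iterations=10):
    """claims: list of (source, attribute, value) triples."""
    sources = {s for s, _, _ in claims}
    reliability = {s: 0.5 for s in sources}          # uniform prior
    truths = {}
    for _ in range(iterations):
        # Step 1: weighted vote per attribute
        votes = defaultdict(lambda: defaultdict(float))
        for s, a, v in claims:
            votes[a][v] += reliability[s]
        truths = {a: max(vals, key=vals.get) for a, vals in votes.items()}
        # Step 2: reliability = fraction of a source's claims matching current truths
        hits, totals = defaultdict(int), defaultdict(int)
        for s, a, v in claims:
            totals[s] += 1
            hits[s] += (truths[a] == v)
        reliability = {s: hits[s] / totals[s] for s in sources}
    return truths, reliability

claims = [("src1", "city", "Paris"), ("src2", "city", "Paris"), ("src3", "city", "Lyon"),
          ("src1", "year", "2020"), ("src3", "year", "2019")]
print(truth_finding(claims))
```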
4 Towards 5G network slicing for vehicular ad hoc networks: An end-to-end approach
Year of publication: 2020 - English PDF pages: 7 - Persian DOC pages: 16
5G networks must not only support increased data rates but also provide a shared infrastructure on which new services with widely varying network Quality of Service (QoS) requirements can be delivered with lower latency. More specifically, vehicular ad hoc network (VANET) applications, which are mainly oriented towards safety and entertainment (such as video streaming and web browsing), are on the rise. Most of these applications have strict latency constraints on the order of a few milliseconds and require high reliability. To address such requirements, the 5G platform needs programmable virtual networks and traffic-handling solutions such as network slicing. To this end, in this paper we propose a dynamic, programmable, end-to-end slicing mechanism for an LTE network based on M-CORD. One of the key features of M-CORD that the proposed network slicing mechanism exploits is the virtualized EPC, which enables customization and modification. M-CORD provides the functionality required to program slice definitions, and the proposed mechanism fully follows its software-defined approach. Furthermore, we show how end devices placed in different slices are assigned different QoS levels based on the type of end user. The results show that the proposed network slicing mechanism selects the appropriate slices and allocates resources to users according to their requirements and service type.
Keywords: Network slicing | Fifth generation (5G) | M-CORD | LTE | NSSF | VANET
Translated article
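As a toy illustration of the slice-selection behaviour described in the abstract above (not M-CORD or EPC code), the sketch below assigns an end device to a network slice based on its latency and bandwidth requirements; the slice names and capacity figures are hypothetical.

```python
# Toy sketch of the slice-selection idea described above (not M-CORD code):
# pick a network slice for an end device based on its latency and bandwidth
# requirements, then reserve capacity on that slice. All values are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Slice:
    name: str
    max_latency_ms: float   # latency bound the slice is engineered for
    capacity_mbps: float    # remaining capacity

SLICES = [
    Slice("safety",       max_latency_ms=5.0,   capacity_mbps=200.0),
    Slice("infotainment", max_latency_ms=100.0, capacity_mbps=1000.0),
    Slice("best_effort",  max_latency_ms=500.0, capacity_mbps=5000.0),
]

def select_slice(required_latency_ms: float, required_mbps: float) -> Optional[Slice]:
    """Pick the least over-provisioned slice that satisfies latency and capacity."""
    candidates = [s for s in SLICES
                  if s.max_latency_ms <= required_latency_ms and s.capacity_mbps >= required_mbps]
    if not candidates:
        return None
    best = max(candidates, key=lambda s: s.max_latency_ms)   # tightest fit on latency
    best.capacity_mbps -= required_mbps                      # reserve resources
    return best

print(select_slice(required_latency_ms=10.0, required_mbps=5.0).name)    # -> "safety"
print(select_slice(required_latency_ms=200.0, required_mbps=50.0).name)  # -> "infotainment"
```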
5 Rigor and reproducibility for data analysis and design in the behavioral sciences
Year of publication: 2020
The rigor and reproducibility of scientific methods depend heavily on the appropriate use of statistical methods to answer research questions and make meaningful and accurate inferences based on data. The increasing analytic complexity and the valuation of novel statistical and methodological approaches to data place greater emphasis on statistical review. We outline the controversies within the statistical sciences that threaten the rigor and reproducibility of research published in the behavioral sciences and discuss ongoing approaches to generate reliable and valid inferences from data. We outline nine major areas to consider for evaluating the rigor and reproducibility of published articles and apply this framework to the 116 Behaviour Research and Therapy (BRAT) articles published in 2018. The results of our analysis highlight a pattern of missing rigor and reproducibility elements, especially pre-registration of study hypotheses, links to statistical code/output, and explicit archiving or sharing of the data used in analyses. We recommend that reviewers consider these elements in their peer review and that journals consider publishing the results of these rigor and reproducibility ratings with manuscripts to incentivize authors to publish these elements with their manuscripts.
Keywords: statistics | big data | reproducibility | reliability | p-hacking
English article
6 Model-based vehicular prognostics framework using Big Data architecture
Year of publication: 2020
Nowadays, continuous technological advances allow designing novel Integrated Vehicle Health Management (IVHM) systems to deal with strict safety regulations in the automotive field, with the aim of improving the efficiency and reliability of automotive components. However, a challenging issue that arises in this domain is handling the huge amount of data that is useful for prognostics. To this aim, in this paper we propose a cloud-based infrastructure, namely Automotive predicTOr Maintenance In Cloud (ATOMIC), for prognostic analysis that leverages Big Data technologies and mathematical models of both nominal and faulty behaviour of automotive components to estimate on-line the End-Of-Life (EOL) and Remaining Useful Life (RUL) indicators for the automotive systems under investigation. A case study based on the Delphi DFG1596 fuel pump is presented to evaluate the proposed prognostic method. Finally, we perform a benchmark analysis of the deployment configurations of the ATOMIC architecture in terms of scalability and cost.
Keywords: Model-based prognostic analysis | Big Data analysis | Cloud computing services
English article
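As a minimal illustration of model-based prognostics (not the ATOMIC framework), the sketch below fits a linear degradation model to a health-indicator history and projects the Remaining Useful Life as the time until the fitted trend crosses a failure threshold; the signal, threshold and time units are hypothetical.

```python
# Minimal model-based prognostics sketch (not the ATOMIC framework): fit a linear
# degradation model to a health-indicator history and project the Remaining Useful
# Life (RUL) as the time until the fitted trend crosses a failure threshold.
# The signal, threshold and time units are hypothetical.
import numpy as np

def estimate_rul(t: np.ndarray, health: np.ndarray, failure_threshold: float) -> float:
    """Return estimated RUL (same time unit as `t`), or inf if no degrading trend."""
    slope, intercept = np.polyfit(t, health, deg=1)   # linear degradation model
    if slope >= 0:                                    # not degrading
        return float("inf")
    t_fail = (failure_threshold - intercept) / slope  # time when trend hits threshold
    return max(0.0, t_fail - t[-1])

# Hypothetical fuel-pump health indicator sampled hourly, failing below 0.2
t = np.arange(0, 100, dtype=float)
health = 1.0 - 0.005 * t + np.random.default_rng(0).normal(0, 0.01, t.size)
print(f"estimated RUL is about {estimate_rul(t, health, failure_threshold=0.2):.1f} hours")
```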
7 Multi-objective scheduling of extreme data scientific workflows in Fog
Year of publication: 2020
The concept of "extreme data" is a recent reincarnation of the "big data" problem, distinguished by the massive amounts of information that must be analyzed with strict time requirements. In the past decade, Cloud data centers have been envisioned as the essential computing architectures for enabling extreme data workflows. However, Cloud data centers are often geographically distributed. Such geographical distribution increases offloading latency, making it unsuitable for processing workflows with strict latency requirements, as the data transfer times can be very high. Fog computing emerged as a promising solution to this issue, as it allows partial workflow processing in lower network layers. Performing data processing on the Fog significantly reduces data transfer latency, making it possible to meet workflows' strict latency requirements. However, the Fog layer is highly heterogeneous and loosely connected, which affects the reliability and response time of task offloading. In this work, we investigate the potential of the Fog for scheduling extreme data workflows with strict response time requirements. Moreover, we propose a novel Pareto-based approach for task offloading in the Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability, and financial cost. We evaluate the MOWO workflow scheduler on a set of real-world biomedical, meteorological and astronomy workflows representing examples of extreme data applications with strict latency requirements.
Keywords: Scheduling | Scientific workflows | Fog computing | Task offloading | Monte-Carlo simulation | Multi-objective optimization
English article
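As a toy illustration of the three-objective offloading trade-off described above (not the MOWO algorithm), the sketch below enumerates placements of a small sequential workflow on hypothetical fog and cloud nodes, evaluates each on response time, failure probability and cost, and keeps the Pareto-optimal ones.

```python
# Toy evaluation of the offloading trade-off (not the MOWO algorithm): score each
# placement of a sequential workflow on (response time, failure probability, cost)
# and keep the Pareto-optimal placements. All node figures are hypothetical.
from itertools import product

# node -> (processing seconds per task, per-task reliability, cost per task)
NODES = {"fog": (0.8, 0.97, 0.002), "cloud": (0.3, 0.999, 0.010)}
TRANSFER_S = {"fog": 0.05, "cloud": 0.40}   # data transfer latency per task
N_TASKS = 4                                  # small sequential workflow

def evaluate(placement):
    """Objective vector for one placement; all three objectives are minimized."""
    time = cost = 0.0
    reliability = 1.0
    for node in placement:
        proc, rel, price = NODES[node]
        time += proc + TRANSFER_S[node]
        reliability *= rel
        cost += price
    return (time, 1.0 - reliability, cost)

def dominates(a, b):
    """True if vector a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

scored = {p: evaluate(p) for p in product(NODES, repeat=N_TASKS)}
pareto = [p for p, f in scored.items()
          if not any(dominates(g, f) for q, g in scored.items() if q != p)]
for p in pareto:
    print(p, tuple(round(v, 4) for v in scored[p]))
```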
8 A reliable PUF in a dual function SRAM
Year of publication: 2019
The Internet of Things (IoT) employs resource-constrained sensor nodes for sensing and processing data that require robust, lightweight cryptographic primitives. The SRAM Physical Unclonable Function (SRAM-PUF) is a potential candidate for secure key generation. An SRAM-PUF is able to generate random and unique cryptographic keys based on start-up values by exploiting intrinsic manufacturing process variations. The reuse of the available on-chip SRAM memory in a system as a PUF might achieve useful cost efficiency. However, as CMOS technology scales down, aging-induced Negative Bias Temperature Instability (NBTI) becomes more pronounced, resulting in asymmetric degradation of memory bit cells after prolonged storage of the same bit values. This causes unreliable start-up values for an SRAM-PUF. In this paper, the on-chip memory in the ARM architecture has been used as a case study to investigate reliability in an SRAM-PUF. We show that the bit probability in a 32-bit ARM instruction cache has a predictable pattern and hence predictable aging. Therefore, we propose using an instruction cache as a PUF to save silicon area. Furthermore, we propose a bit selection technique to mitigate the NBTI effect. We show that this technique can reduce the predicted bit error in an SRAM-PUF from 14.18% to 5.58% over 5 years. Consequently, as the bit error reduces, the area overhead of the error-correction circuitry is about 6× smaller compared to that without a bit selection technique.
Keywords: Aging | Physical unclonable function | SRAM | Reliability
English article
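The sketch below illustrates a generic stable-bit-selection enrollment step for an SRAM-PUF, in the spirit of the bit selection technique mentioned above but not the paper's NBTI-aware method: power the array up several times, keep only cells whose start-up value never flips, and build the response from those cells. The start-up data are simulated.

```python
# Minimal sketch of a generic stable-bit-selection enrollment for an SRAM-PUF
# (not the paper's NBTI-aware technique). Start-up data here are simulated.
import numpy as np

rng = np.random.default_rng(42)
N_CELLS, N_POWERUPS = 1024, 20

# Simulated per-cell bias: most cells strongly prefer 0 or 1, a few are noisy.
bias = np.clip(rng.normal(0.5, 0.35, N_CELLS), 0.02, 0.98)
startups = rng.random((N_POWERUPS, N_CELLS)) < bias      # one row per power-up

def select_stable_cells(startups: np.ndarray) -> np.ndarray:
    """Indices of cells that produced the same start-up value in every power-up."""
    ones = startups.sum(axis=0)
    return np.flatnonzero((ones == 0) | (ones == startups.shape[0]))

stable = select_stable_cells(startups)
key_bits = startups[0, stable].astype(np.uint8)           # enrollment response
print(f"{stable.size}/{N_CELLS} cells selected, first key bits: {key_bits[:16]}")
```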
9 Analytical games for knowledge engineering of expert systems in support to Situational Awareness: The Reliability Game case study
Year of publication: 2019
Knowledge Acquisition (KA) methods are of paramount importance in the design of intelligent systems. Research is ongoing to improve their effectiveness and efficiency. Analytical games appear to be a promising tool to support KA. In fact, in this paper we describe how analytical games could be used for Knowledge Engineering of Bayesian networks, through the presentation of the case study of the Reliability Game. This game has been developed with the aim of collecting data on the impact of meta-knowledge about sources of information upon human Situational Assessment in a maritime context. In this paper we describe the computational model obtained from the dataset and how the card positions, which reflect a player's belief, can be easily converted into subjective probabilities and used to learn latent constructs, such as source reliability, by applying the Expectation-Maximisation algorithm.
Keywords: Source reliability | Expert knowledge | Knowledge acquisition | Bayesian networks | Parameter learning | Analytical game
English article
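As a rough illustration of the step mentioned in the abstract above, turning card positions that reflect a player's belief into subjective probabilities, the sketch below maps distances on a belief board to normalized probabilities; the board geometry and distance scale are entirely hypothetical.

```python
# Rough illustration of converting card positions into subjective probabilities.
# The board geometry (distances 0..10 from each hypothesis zone) is hypothetical.
def positions_to_probabilities(distances: dict, board_max: float = 10.0) -> dict:
    """Closer cards mean stronger belief: weight = board_max - distance, then normalize."""
    weights = {h: max(board_max - d, 0.0) for h, d in distances.items()}
    total = sum(weights.values())
    if total == 0:
        return {h: 1.0 / len(distances) for h in distances}   # no information
    return {h: w / total for h, w in weights.items()}

# Card placed close to the "smuggler" hypothesis (distance 2), far from the others.
print(positions_to_probabilities({"smuggler": 2.0, "fisherman": 7.0, "ferry": 9.0}))
```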
10 A new two-level information protection scheme based on visual cryptography and QR code with multiple decryptions
Year of publication: 2019
Nowadays, the Quick Response (QR) code has been used in many fields due to its advantages, such as reliability, high-speed scanning and large data capacity. However, embedding privacy information into the QR code lacks adequate security protection. In this paper, a new two-level information protection scheme is designed based on visual cryptography and QR code. Using any standard QR reader device or software, the public-level information can be read out directly from the shares. Moreover, the privacy-level information can be decoded by three different decryptions, which are suitable for non-computation environments with relative difference 1/4, lightweight-computation environments with relative difference 1/2 and common computation environments with relative difference 1, respectively. Since the proposed scheme keeps the advantages of visual cryptography and QR code, it differs from related schemes in its low computational complexity, robustness against deformations, and high payload. The effectiveness of the proposed scheme has been proved theoretically. Experimental results and analysis demonstrate that the proposed scheme can protect two-level information with multiple decryptions and has many benefits compared with previous schemes.
Keywords: Two-level information protection | Visual cryptography | QR code | Multiple decryptions
English article
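As background for the visual cryptography building block mentioned above (not the paper's QR-code-based two-level scheme), the sketch below implements the classic (2,2) visual cryptography construction with relative difference 1/2: each secret pixel expands into two subpixels per share, identical patterns for white pixels and complementary patterns for black ones.

```python
# Minimal classic (2,2) visual cryptography sketch (Naor-Shamir style, relative
# difference 1/2), shown as an illustration of the VC building block, not the
# paper's QR-code-based two-level scheme. 1 = black subpixel, 0 = white subpixel.
import numpy as np

rng = np.random.default_rng(1)

def make_shares(secret: np.ndarray):
    """Expand each secret pixel into two subpixels on two shares."""
    h, w = secret.shape
    share1 = np.zeros((h, 2 * w), dtype=np.uint8)
    share2 = np.zeros((h, 2 * w), dtype=np.uint8)
    for i in range(h):
        for j in range(w):
            pattern = np.array([1, 0]) if rng.random() < 0.5 else np.array([0, 1])
            share1[i, 2 * j:2 * j + 2] = pattern
            # same pattern for a white pixel, complementary pattern for a black one
            share2[i, 2 * j:2 * j + 2] = pattern if secret[i, j] == 0 else 1 - pattern
    return share1, share2

secret = np.array([[0, 1, 1, 0],
                   [1, 0, 0, 1]], dtype=np.uint8)      # tiny binary "image"
s1, s2 = make_shares(secret)
stacked = s1 | s2          # physically overlaying transparencies = pixel-wise OR
print(stacked)             # black pixels appear fully dark, white ones half dark
```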