Download and view articles related to HPC :: Page 1


Search results - HPC

Number of articles found: 44
No. | Title | Type
1 System-level Power Integrity Optimization Based on High-Density Capacitors for enabling HPC/AI applications (2020)
In this work, we introduce platform-level power integrity (PI) solutions to enable high-power core IPs and a high-bandwidth memory (HBM) interface for HPC/AI applications. High-complexity design methodology becomes more significant for enabling high-power operation of CPUs/GPUs/NPUs that iteratively perform tremendous computing processes. In order to achieve high-power performance in the larger-than-200 W class, system-level PI analysis and design guidance at an early design stage are required to prevent drastic voltage variations at the bump under comprehensive environments including SoC, interposer, package and board characteristics. PI solutions based on high-density on-die capacitors are suitable for mitigating voltage fluctuations by quickly supplying stored charge to silicon devices. By adopting 2-/3-plate metal-insulator-metal (MIM) capacitors with approximately 20 nF/mm² and 40 nF/mm², and an integrated stacked capacitor (ISC) with approximately 300 nF/mm², it is demonstrated that voltage properties (drop and ripple) can be improved by system-level design optimization such as power delivery network (PDN) design and RTL-architecture manipulation. Consequently, system-level PI solutions based on high-density capacitors are anticipated to contribute to improving the target performance of high-power products in response to customers' expectations for HPC/AI applications.
Keywords: HPC/AI | high-power applications | power integrity | power delivery network | decoupling capacitor | system-level design optimization
English article
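A rough sense of why on-die capacitance matters can be had from the lumped relation ΔV ≈ I·Δt/C: the decoupling capacitor alone supplies the load current during a short transient. The Python sketch below evaluates this for the capacitor densities quoted in the abstract above; the die area, supply voltage and transient duration are illustrative assumptions, not values from the paper.

# Back-of-envelope droop estimate for on-die decoupling capacitance.
# Lumped model: delta_V ~= I * dt / C.  Only the nF/mm2 densities come
# from the abstract; every other number is an assumed example.

densities_nf_mm2 = {"2-plate MIM": 20, "3-plate MIM": 40, "ISC": 300}  # from abstract
die_area_mm2 = 100.0      # assumed die area
vdd = 0.8                 # assumed supply voltage (V)
power_w = 200.0           # ">200 W class" operation, per abstract
dt = 1e-9                 # assumed transient duration (s)

i_load = power_w / vdd    # load current (A)

for name, density in densities_nf_mm2.items():
    c_farads = density * 1e-9 * die_area_mm2
    droop = i_load * dt / c_farads
    print(f"{name:>12}: C = {c_farads*1e6:.1f} uF, droop ~= {droop*1e3:.1f} mV "
          f"({droop / vdd:.1%} of Vdd)")

Under these assumed numbers the ISC density keeps the droop near 1% of Vdd while the thinner MIM stacks see an order of magnitude more, which is the qualitative point the abstract makes.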
2 Shape-stabilized hydrated salt/paraffin composite phase change materials for advanced thermal energy storage and management (2020)
Thermal energy storage and management have attracted considerable interest in the field of sustainable control and utilization of energy. Thermal energy storage materials with excellent thermal properties and shape stability are in high demand. Herein, we developed a simple and effective method to fabricate hydrated salt/paraffin composite (HPC) shape-stabilized phase change materials (SSPCMs). Hydrated salt was emulsified into paraffin by an inverse emulsion template method to obtain the HPC. Owing to its low volatility, paraffin enhanced the thermal stability of the hydrated salt by preventing its direct contact with the environment. Furthermore, after its crystallization, paraffin provided nucleation sites and functioned as a nucleating agent to promote the crystallization of the hydrated salt. The HPC was then simultaneously impregnated into a cellulose sponge (CS), forming the SSPCMs, which exhibited excellent thermal stability, a high energy storage density with a phase transition enthalpy of 227.3 J/g, and a reduced supercooling degree. In addition, there was negligible leakage during testing. The efficiency of the SSPCMs as temperature management materials was then tested by using them as a lining in fully enclosed protective clothing.
Keywords: Hydrated salt | Paraffin | Phase change materials | Thermal stability | Supercooling degree
English article
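As a quick worked example of what the reported storage density means in practice, the sketch below converts the 227.3 J/g phase-transition enthalpy from the abstract into total latent heat for an assumed lining mass; the mass is a hypothetical example, not a figure from the paper.

# Latent heat stored across the phase transition, using the enthalpy
# reported in the abstract (227.3 J/g).  Lining mass is an assumed example.
enthalpy_j_per_g = 227.3
lining_mass_g = 500.0              # assumed mass of PCM lining
energy_j = enthalpy_j_per_g * lining_mass_g
print(f"Latent heat stored: {energy_j/1000:.1f} kJ ({energy_j/3600:.1f} Wh)")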
3 Scaling laws of entangled polysaccharides (2020)
We study the dilute solution properties and entangled dynamics of hydroxypropyl cellulose (HPC), a semiflexible polymer, in aqueous solution. Intrinsic viscosity data are consistent with a polymer in θ solvent with a Kuhn length of ≃ 22 nm. The overlap concentration, estimated as the reciprocal of the intrinsic viscosity, scales with the degree of polymerisation as c* ∝ N^−0.9. We evaluate different methods for estimating the entanglement crossover, following the de Gennes scaling and hydrodynamic scaling models, and show that these lead to similar results. Above the entanglement concentration, the specific viscosity, longest relaxation time and plateau modulus scale as η_sp ≃ N^3.9 c^4.2, τ ≃ N^3.9 c^2.4 and G_P ≃ N^0 c^1.9. A comparison with other polymers suggests that the rheological properties displayed by HPC are common to many polysaccharide systems of varying backbone composition, stiffness and solvent quality, as long as the effect of hyper-entanglements can be neglected. On the other hand, the observed scaling laws differ appreciably from those of synthetic flexible polymers in good or θ solvent.
Keywords: Polysaccharide | Cellulose | Viscosity | Rheology | Entanglement | LCST | Hydroxypropyl cellulose | Scaling | Kuhn length | Rouse mode
English article
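The abstract above reports only the exponents of the entangled-solution power laws, not their prefactors, so only relative changes can be evaluated. A minimal sketch, with hypothetical N and c values chosen purely for illustration:

# Evaluate the scaling laws reported in the abstract:
#   eta_sp ~ N^3.9 c^4.2,  tau ~ N^3.9 c^2.4,  G_P ~ N^0 c^1.9
# Prefactors are unknown here, so only *relative* changes are meaningful.

def scalings(N: float, c: float) -> dict:
    return {
        "eta_sp": N**3.9 * c**4.2,
        "tau":    N**3.9 * c**2.4,
        "G_P":    N**0   * c**1.9,
    }

base = scalings(N=1000, c=0.05)       # hypothetical reference state
double_c = scalings(N=1000, c=0.10)   # double the concentration

for key in base:
    print(f"{key}: x{double_c[key] / base[key]:.1f} when c is doubled")

Doubling the concentration multiplies the specific viscosity by roughly 2^4.2 ≈ 18 while the plateau modulus only grows by about 2^1.9 ≈ 3.7, which is the kind of contrast these exponents encode.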
4 CYBELE – Fostering precision agriculture & livestock farming through secure access to large-scale HPC enabled virtual industrial experimentation environments fostering scalable big data analytics (2020)
According to McKinsey & Company, about a third of food produced is lost or wasted every year, amounting to a $940 billion economic hit. Inefficiencies in planting, harvesting, water use, reduced animal contributions, as well as uncertainty about weather, pests, consumer demand and other intangibles contribute to the loss. Precision Agriculture (PA) and Precision Livestock Farming (PLF) come to assist in optimizing agricultural and livestock production and minimizing the wastes and costs aforementioned. PA is a technology-enabled, data-driven approach to farming management that observes, measures, and analyzes the needs of individual fields and crops. PLF is also a technology-enabled, data-driven approach to livestock production management, which exploits technology to quantitatively measure the behavior, health and performance of animals. Big data delivered by a plethora of data sources related to these domains has a multitude of payoffs, including precision monitoring of fertilizer and fungicide levels to optimize crop yields, risk mitigation that results from monitoring when temperature and humidity levels reach dangerous levels for crops, increasing livestock production while minimizing the environmental footprint of livestock farming, ensuring high levels of welfare and health for animals, and more. By adding analytics to these sensor and image data, opportunities also exist to further optimize PA and PLF by having continuous data on how a field or the livestock is responding to a protocol. For these domains, two main challenges exist: 1) to exploit this multitude of data, facilitating dedicated improvements in performance, and 2) to make available advanced infrastructure so as to harness the power of this information in order to benefit from the new insights, practices and products, efficiently time-wise, lowering responsiveness down to seconds so as to cater for time-critical decisions. The current paper aims to introduce CYBELE, a platform aspiring to safeguard that the stakeholders involved in the agri-food value chain (research community, SMEs, entrepreneurs, etc.) have integrated, unmediated access to a vast amount of very-large-scale datasets of diverse types and coming from a variety of sources, and that they are capable of actually generating value and extracting insights out of these data, by providing secure and unmediated access to large-scale High Performance Computing (HPC) infrastructures supporting advanced data discovery, processing, combination and visualization services, solving computationally intensive challenges modelled as mathematical algorithms requiring very high computing power and capability.
Keywords: Precision agriculture | Precision livestock farming | High performance computing | Big data analytics
English article
5 Adaptive request scheduling for the I/O forwarding layer using reinforcement learning (2020)
In this paper, we propose an approach to adapt the I/O forwarding layer of HPC systems to applications' access patterns. I/O optimization techniques can improve performance for the access patterns they were designed to target, but they often decrease performance for others. Furthermore, these techniques usually depend on the precise tuning of their parameters, which commonly falls to the users. Instead, we propose to do it dynamically at runtime based on the I/O workload observed by the system. Our approach uses a reinforcement learning technique – contextual bandits – to make the system capable of learning the best parameter value for each observed access pattern during its execution. This eliminates the need for a complicated and time-consuming prior training phase. Our case study is the TWINS scheduling algorithm, where performance improvements depend on the time window parameter, which in turn depends on the workload. We evaluate our proposal and demonstrate that it can reach a precision of 88% on parameter selection within the first hundreds of observations of an access pattern, achieving 99% of the optimal performance. We demonstrate that the system – which is expected to live for years – will be able to adapt to changes and optimize its performance after having observed an access pattern for a few (not necessarily contiguous) minutes.
Keywords: High performance I/O | Parallel I/O | I/O scheduling | I/O forwarding | Reinforcement learning | Auto-tuning
English article
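The abstract above describes choosing the TWINS time-window parameter per observed access pattern with a contextual bandit. A minimal epsilon-greedy sketch of that idea, where the candidate window values, the access-pattern labels and the reward signal (observed bandwidth) are hypothetical placeholders rather than the paper's actual implementation:

import random
from collections import defaultdict

# Epsilon-greedy contextual bandit: one arm per candidate time window,
# separate statistics per observed access-pattern "context".

WINDOWS_MS = [0.125, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0]   # candidate windows (assumed)
EPSILON = 0.1

sums = defaultdict(lambda: [0.0] * len(WINDOWS_MS))    # reward sums per context/arm
counts = defaultdict(lambda: [0] * len(WINDOWS_MS))    # pull counts per context/arm

def choose_window(context: str) -> int:
    """Return the index of the window to use for this access pattern."""
    if random.random() < EPSILON:
        return random.randrange(len(WINDOWS_MS))        # explore
    means = [s / c if c else float("inf")               # untried arms first
             for s, c in zip(sums[context], counts[context])]
    return max(range(len(WINDOWS_MS)), key=means.__getitem__)  # exploit

def update(context: str, arm: int, reward: float) -> None:
    """Feed back the observed performance (e.g. MB/s) for the chosen window."""
    sums[context][arm] += reward
    counts[context][arm] += 1

# Usage sketch: each epoch, classify the workload, pick a window, run with it,
# then report the measured bandwidth back to the bandit.
arm = choose_window("contiguous-1MB-writes")
update("contiguous-1MB-writes", arm, reward=850.0)      # hypothetical MB/s

Because the statistics are kept per context, the same forwarding node can learn different window values for different access patterns, which matches the adaptivity the abstract claims; the lack of a separate training phase follows from learning online from these rewards.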
6 Programming languages for data-intensive HPC applications: A systematic mapping study (2020)
A major challenge in modelling and simulation is the need to combine expertise in both software technologies and a given scientific domain. When High-Performance Computing (HPC) is required to solve a scientific problem, software development becomes a problematic issue. Considering the complexity of software for HPC, it is useful to identify programming languages that can be used to alleviate this issue. Because the existing literature on the topic of HPC is very dispersed, we performed a Systematic Mapping Study (SMS) in the context of the European COST Action cHiPSet. This literature study maps characteristics of various programming languages for data-intensive HPC applications, including category, typical user profiles, effectiveness, and type of articles. We organised the SMS in two phases. In the first phase, relevant articles were identified employing an automated keyword-based search in eight digital libraries. This led to an initial sample of 420 papers, which was then narrowed down in a second phase by human inspection of article abstracts, titles and keywords to 152 relevant articles published in the period 2006–2018. The analysis of these articles enabled us to identify 26 programming languages referred to in 33 of the relevant articles. We compared the outcome of the mapping study with the results of our questionnaire-based survey that involved 57 HPC experts. The mapping study and the survey revealed that the desired features of programming languages for data-intensive HPC applications are portability, performance and usability. Furthermore, we observed that the majority of the programming languages used in the context of data-intensive HPC applications are text-based general-purpose programming languages. Typically, these have a steep learning curve, which makes them difficult to adopt. We believe that the outcome of this study will inspire future research and development in programming languages for data-intensive HPC applications.
Keywords: High performance computing (HPC) | Big data | Data-intensive applications | Programming languages | Domain-Specific language (DSL) | General-Purpose language (GPL) | Systematic mapping study (SMS)
English article
7 Toward modeling and optimization of features selection in Big Data based social Internet of Things (2018)
The growing gap between users and Big Data analytics requires innovative tools that address the challenges posed by big data volume, variety, and velocity. It becomes computationally inefficient to analyze and select features from such massive volumes of data. Moreover, advancements in the field of Big Data applications and data science pose additional challenges, where the selection of appropriate features and a High-Performance Computing (HPC) solution has become a key issue and has attracted attention in recent years. Therefore, keeping in view the needs above, there is a requirement for a system that can efficiently select features and analyze a stream of Big Data within its requirements. Hence, this paper presents a system architecture that selects features by using the Artificial Bee Colony (ABC) algorithm. Moreover, a Kalman filter is used in the Hadoop ecosystem for the removal of noise. Furthermore, traditional MapReduce is combined with ABC to enhance processing efficiency. A complete four-tier architecture is also proposed that efficiently aggregates the data, eliminates unnecessary data, and analyzes the data using the proposed Hadoop-based ABC algorithm. To check the efficiency of the algorithms exploited in the proposed system architecture, we have implemented our system using Hadoop and MapReduce with the ABC algorithm. The ABC algorithm is used to select features, whereas MapReduce is supported by a parallel algorithm that efficiently processes a huge volume of data sets. The system is implemented using the MapReduce tool on top of the Hadoop parallel nodes in near real time. Moreover, the proposed system is compared with swarm approaches and is evaluated in terms of efficiency, accuracy and throughput using ten different data sets. The results show that the proposed system is more scalable and efficient in selecting features.
Keywords: SIoT | Big Data | ABC algorithm | Feature selection
English article
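The abstract above uses Artificial Bee Colony (ABC) search for feature selection. Below is a minimal single-node sketch of that metaheuristic, with binary feature masks as food sources and a placeholder fitness function; the Hadoop/MapReduce distribution layer the paper relies on is omitted, and all constants are illustrative assumptions.

import random

N_FEATURES = 20
N_SOURCES = 10       # food sources (== employed bees)
LIMIT = 5            # abandonment limit before a scout replaces a source
ITERATIONS = 50

def fitness(mask):
    # Placeholder objective: reward "informative" features, penalise mask size.
    # In the paper's setting this would be a model-quality score on the data.
    informative = set(range(0, N_FEATURES, 3))          # pretend ground truth
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    return hits - 0.1 * sum(mask)

def random_mask():
    return [random.randint(0, 1) for _ in range(N_FEATURES)]

def neighbour(mask, other):
    # Perturb one position, sometimes copying it from a partner solution.
    new = mask[:]
    j = random.randrange(N_FEATURES)
    new[j] = other[j] if random.random() < 0.5 else 1 - new[j]
    return new

sources = [random_mask() for _ in range(N_SOURCES)]
trials = [0] * N_SOURCES

for _ in range(ITERATIONS):
    fits = [fitness(s) for s in sources]
    weights = [f - min(fits) + 1e-9 for f in fits]       # onlookers favour fitter sources
    for phase in ("employed", "onlooker"):
        for i in range(N_SOURCES):
            idx = i if phase == "employed" else random.choices(range(N_SOURCES), weights)[0]
            partner = sources[random.randrange(N_SOURCES)]
            candidate = neighbour(sources[idx], partner)
            if fitness(candidate) > fitness(sources[idx]):
                sources[idx], trials[idx] = candidate, 0
            else:
                trials[idx] += 1
    # Scout phase: abandon sources that stopped improving.
    for i in range(N_SOURCES):
        if trials[i] > LIMIT:
            sources[i], trials[i] = random_mask(), 0

best = max(sources, key=fitness)
print("selected features:", [i for i, bit in enumerate(best) if bit])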
8 Improving the Effectiveness of Burst Buffers for Big Data Processing in HPC Systems with Eley (2018)
Burst Buffers (BBs) are an effective solution for reducing the data transfer time and the I/O interference in HPC systems. Extending BBs to handle Big Data applications is challenging because BBs must account for the large data inputs of Big Data applications and the Quality-of-Service (QoS) of HPC applications, which are considered first-class citizens in HPC systems. Existing BBs focus only on the intermediate data of Big Data applications and incur a high performance degradation of both Big Data and HPC applications. We present Eley, a burst buffer solution that helps accelerate the performance of Big Data applications while guaranteeing the QoS of HPC applications. To achieve this goal, Eley embraces an interference-aware prefetching technique that makes reading the data input faster while introducing low interference for HPC applications. Evaluations using a wide range of Big Data and HPC applications demonstrate that Eley improves the performance of Big Data applications by up to 30% compared to existing BBs while maintaining the QoS of HPC applications.
Keywords: HPC | MapReduce | Big data | Parallel file systems | Burst buffers | Interference | Prefetch
English article
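One way such interference-aware prefetching could look is sketched below: stage the next input chunk into the burst buffer only while the observed HPC I/O load stays below a threshold, and back off otherwise. The load metric, threshold and every function here are hypothetical placeholders, not Eley's actual design or API.

import time

HPC_LOAD_THRESHOLD = 0.6   # fraction of PFS bandwidth used by HPC jobs (assumed)
BACKOFF_S = 0.5

def hpc_io_load() -> float:
    """Placeholder: return the current HPC I/O load as a fraction in [0, 1]."""
    return 0.3

def fetch_chunk(job_id: str, index: int) -> bytes:
    """Placeholder: read one input chunk from the parallel file system."""
    return b"..."

def burst_buffer_store(job_id: str, index: int, data: bytes) -> None:
    """Placeholder: stage the chunk in the burst buffer."""
    pass

def prefetch_input(job_id: str, n_chunks: int) -> None:
    index = 0
    while index < n_chunks:
        if hpc_io_load() < HPC_LOAD_THRESHOLD:
            burst_buffer_store(job_id, index, fetch_chunk(job_id, index))
            index += 1
        else:
            time.sleep(BACKOFF_S)   # back off so HPC I/O keeps its QoS

prefetch_input("wordcount-42", n_chunks=8)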
9 High-Performance Correlation and Mapping Engine for rapid generating brain connectivity networks from big fMRI data (2018)
Brain connectivity networks help physicians better understand the neurological effects of certain diseases and make improved treatment options for patients. Seed-based Correlation Analysis (SCA) of Functional Magnetic Resonance Imaging (fMRI) data has been used to create individual brain connectivity networks. However, an outstanding issue is the long processing time to generate full brain connectivity maps. With close to a million individual voxels in a typical fMRI dataset, the number of calculations involved in a voxel-by-voxel SCA becomes very high. With the emergence of dynamic time-varying functional connectivity analysis, population-based studies, and studies relying on real-time neurological feedback, the need for rapid processing methods becomes even more critical. This work aims to develop a new method which produces high-resolution brain connectivity maps rapidly. The new method accelerates the correlation processing by using an architecture that includes clustered FPGAs and an efficient memory pipeline, which is termed the High-Performance Correlation and Mapping Engine (HPCME). The method has been tested with datasets from the Human Connectome Project. The preliminary results show that HPCME with four FPGAs can improve the SCA processing speed by a factor of 27 or more over that of a PC workstation with a multicore CPU.
Keywords: Brain Functional Connectivity | fMRI | Seed-based Correlation Analysis | FPGA-based Parallel Computing | Human Connectome Project
English article
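The computation the abstract above accelerates is seed-based correlation: the Pearson correlation of one seed voxel's time series with every other voxel. A minimal NumPy reference on synthetic data, with illustrative sizes; the paper performs this same math on clustered FPGAs.

import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_t = 10_000, 200                           # illustrative sizes
data = rng.standard_normal((n_voxels, n_t)).astype(np.float32)

def seed_correlation_map(data: np.ndarray, seed_idx: int) -> np.ndarray:
    """Return the Pearson correlation of every voxel with the seed voxel."""
    z = data - data.mean(axis=1, keepdims=True)        # remove each voxel's mean
    z /= np.linalg.norm(z, axis=1, keepdims=True)      # unit-normalise time series
    return z @ z[seed_idx]                             # dot products == correlations

corr = seed_correlation_map(data, seed_idx=1234)
print(corr.shape, corr[1234])   # (10000,); the seed correlates with itself at 1.0

With close to a million voxels and many seeds or time windows, this matrix-vector step is repeated enormously often, which is why the abstract argues for a dedicated correlation engine.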
10 Scalable system scheduling for HPC and big data (2018)
In the rapidly expanding field of parallel processing, job schedulers are the "operating systems" of modern big data architectures and supercomputing systems. Job schedulers allocate computing resources and control the execution of processes on those resources. Historically, job schedulers were the domain of supercomputers, and job schedulers were designed to run massive, long-running computations over days and weeks. More recently, big data workloads have created a need for a new class of computations consisting of many short computations taking seconds or minutes that process enormous quantities of data. For both supercomputers and big data systems, the efficiency of the job scheduler represents a fundamental limit on the efficiency of the system. Detailed measurement and modeling of the performance of schedulers are critical for maximizing the performance of a large-scale computing system. This paper presents a detailed feature analysis of 15 supercomputing and big data schedulers. For big data workloads, the scheduler latency is the most important performance characteristic of the scheduler. A theoretical model of the latency of these schedulers is developed and used to design experiments targeted at measuring scheduler latency. Detailed benchmarking of four of the most popular schedulers (Slurm, Son of Grid Engine, Mesos, and Hadoop YARN) is conducted. The theoretical model is compared with data and demonstrates that scheduler performance can be characterized by two key parameters: the marginal latency of the scheduler t_s and a nonlinear exponent α_s. For all four schedulers, the utilization of the computing system decreases to <10% for computations lasting only a few seconds. Multi-level schedulers (such as LLMapReduce) that transparently aggregate short computations can improve utilization for these short computations to >90% for all four of the schedulers that were tested.
Keywords: Scheduler | Resource manager | Job scheduler | High performance computing | Data analytics
English article
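To illustrate why second-scale tasks collapse utilization, here is a toy model in the spirit of the two parameters named in the abstract (a marginal scheduler latency t_s and a nonlinear exponent α_s). The formula and all numeric values below are assumed stand-ins for illustration only, not the paper's actual model or measurements.

# Toy utilization model: each job does t_job seconds of useful work but pays
# an amortised launch overhead of t_s * n_jobs**alpha_s / n_jobs.
# Assumed, simplified stand-in; shows the qualitative trend only.

def utilization(t_job: float, n_jobs: int, t_s: float, alpha_s: float) -> float:
    overhead_per_job = t_s * n_jobs ** alpha_s / n_jobs
    return t_job / (t_job + overhead_per_job)

for t_job in (1, 10, 100, 1000):      # seconds of useful work per job
    u = utilization(t_job, n_jobs=1000, t_s=0.5, alpha_s=1.5)
    print(f"t_job = {t_job:>4}s -> utilization ~ {u:.0%}")

Under these made-up numbers, one-second jobs keep the system below 10% busy while thousand-second jobs approach full utilization, which mirrors the trend the abstract reports and motivates the multi-level aggregation it mentions.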