Dynamic occupant density models of commercial buildings for urban energy simulation
Year: 2020
The number of occupants and its changing pattern over time are key information for building and urban energy simulation. However, the commonly used assumption and simplification of a fixed occupancy schedule does not reflect the complicated reality, leading to significant errors in energy simulation. Therefore, dynamic occupant density models that describe the real-world situation more accurately should be developed. This paper presents a methodology to develop such a model for commercial buildings and expand it from the building level to the urban level. First, a total of 2275 commercial buildings in Nanjing, a major city in China, are identified and classified into three sub-categories using Points of Interest and logistic regression. Then, field measurement is conducted to obtain the hourly occupant density for 12 sample commercial buildings. The building-level dynamic occupant density model is developed by fitting normal distribution functions to the measured data. Finally, transportation accessibility and population level, two urban parameters, are defined and used to expand the building-level occupant density model to the urban-level one. The dynamic urban-level occupant density model is verified for all three sub-categories of commercial buildings, and the overall results are acceptable.
Keywords: Big data | Commercial buildings | Urban-level | Dynamic occupant density models
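The core fitting step — representing a building's daily occupancy as a normal distribution over the hour of day — can be sketched as follows. The method-of-moments fit, the synthetic hourly densities, and the peak hour are illustrative assumptions, not the paper's measured data or exact procedure:

```python
import math

def fit_gaussian_profile(hours, density):
    """Fit a normal-shaped daily occupancy profile by the method of moments.

    hours   : hour-of-day values (0..23)
    density : measured occupant density at each hour

    Returns (peak, mu, sigma) for the model
        d(t) = peak * exp(-(t - mu)**2 / (2 * sigma**2))
    """
    total = sum(density)
    mu = sum(h * d for h, d in zip(hours, density)) / total      # weighted mean hour
    var = sum(d * (h - mu) ** 2 for h, d in zip(hours, density)) / total
    sigma = math.sqrt(var)
    peak = max(density)
    return peak, mu, sigma

# Synthetic hourly densities peaking around 14:00 (illustrative, not measured data)
hours = list(range(24))
density = [0.2 * math.exp(-(h - 14.0) ** 2 / (2 * 3.0 ** 2)) for h in hours]

peak, mu, sigma = fit_gaussian_profile(hours, density)
```

A least-squares fit would be the more usual choice on real, noisy measurements; the moment fit keeps the sketch dependency-free.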
Multi-objective scheduling of extreme data scientific workflows in Fog
Year: 2020
The concept of ‘‘extreme data’’ is a recent re-incarnation of the ‘‘big data’’ problem, distinguished by the massive amounts of information that must be analyzed under strict time requirements. In the past decade, Cloud data centers have been envisioned as the essential computing architectures for enabling extreme data workflows. However, Cloud data centers are often geographically distributed. Such geographical distribution increases offloading latency, making the Cloud unsuitable for processing workflows with strict latency requirements, as the data transfer times can be very high. Fog computing has emerged as a promising solution to this issue, as it allows partial workflow processing in lower network layers. Performing data processing in the Fog significantly reduces data transfer latency, making it possible to meet the workflows' strict latency requirements. However, the Fog layer is highly heterogeneous and loosely connected, which affects the reliability and response time of task offloading. In this work, we investigate the potential of the Fog for scheduling extreme data workflows with strict response time requirements. Moreover, we propose a novel Pareto-based approach for task offloading in the Fog, called Multi-objective Workflow Offloading (MOWO). MOWO considers three optimization objectives, namely response time, reliability, and financial cost. We evaluate the MOWO workflow scheduler on a set of real-world biomedical, meteorological, and astronomy workflows representing examples of extreme data applications with strict latency requirements.
Keywords: Scheduling | Scientific workflows | Fog computing | Task offloading | Monte-Carlo simulation | Multi-objective optimization
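A minimal sketch of the Pareto-dominance filtering that underlies an approach like MOWO, over the three stated objectives (response time, reliability, financial cost). The candidate numbers and the exhaustive filter are assumptions for illustration, not MOWO's actual algorithm:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one.
    All objectives are minimized: (response_time, -reliability, cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the non-dominated offloading candidates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical offloading options: (response time in s, -reliability, cost in $)
options = [
    (2.0, -0.99, 5.0),   # cloud: slow, very reliable, costly
    (0.5, -0.90, 2.0),   # fog node: fast, less reliable, cheap
    (2.5, -0.95, 6.0),   # dominated by the first option in all three objectives
]
front = pareto_front(options)
```

The first two options survive as Pareto-optimal trade-offs; a scheduler would then pick among them according to workflow priorities.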
A security oriented transient-noise simulation methodology: Evaluation of intrinsic physical noise of cryptographic designs
Year: 2019
Noise in digital circuits has always been minimized to achieve high signal integrity, robust operation and, of course, high performance. However, for cryptographic applications, increased noise can in fact be beneficial. It can be used effectively to reduce the (cryptographic) Signal-to-Noise Ratio (SNR) and to make it harder for an adversary to extract useful information (e.g., secret keys) from the side-channel leakage data. A natural question concerns the extent of intrinsic (internal) noise required to improve security. In this manuscript, we explore this question and further introduce a methodology to exploit the intrinsic physical noise (i.e., flicker and thermal noise) at the secure-circuit level. We additionally demonstrate how the values obtained from our methodology translate into relevant cryptographic metrics. Our simulations show that the calculated cryptographic noise values are in close agreement with the noise levels extracted from noisy distributions using transient noise analysis. We finally evaluate (with the proposed methodology) several meaningful parameters which affect the internal noise, and hence the achievable security, such as transistor sizing and supply-voltage changes.
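The cryptographic SNR referred to above is commonly estimated as the variance of the data-dependent leakage means over the average within-group (noise) variance. A minimal sketch under an assumed Hamming-weight leakage model with Gaussian noise — not the paper's transistor-level transient-noise simulation:

```python
import random
import statistics

def hamming_weight(x):
    return bin(x).count("1")

def snr(traces_by_value):
    """Cryptographic SNR: variance of the per-value leakage means (the signal)
    divided by the mean of the per-value variances (the noise)."""
    means = [statistics.mean(g) for g in traces_by_value.values()]
    noise = [statistics.pvariance(g) for g in traces_by_value.values()]
    return statistics.pvariance(means) / statistics.mean(noise)

random.seed(0)
sigma = 0.5  # assumed intrinsic noise level, in leakage units
# 200 simulated leakage samples per processed byte value
traces_by_value = {v: [hamming_weight(v) + random.gauss(0, sigma) for _ in range(200)]
                   for v in range(256)}
ratio = snr(traces_by_value)
```

With this model the signal variance is that of the Hamming weight over all bytes (2.0) and the noise variance is sigma squared (0.25), so the SNR lands near 8; raising the intrinsic noise lowers it, which is exactly the security benefit the abstract describes.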
A simulated approach to evaluate side-channel attack countermeasures for the Advanced Encryption Standard
Year: 2019
Modern networks have critical security needs, and a suitable level of protection and performance is usually achieved with dedicated hardware cryptographic cores. Although the Advanced Encryption Standard (AES) is considered the best approach when symmetric cryptography is required, one of its main weaknesses lies in its measurable power consumption. Side-Channel Attacks (SCAs) use this emitted power to analyse and revert the mathematical steps and extract the encryption key. Nowadays, several dedicated instruments and workstations exist for analysing SCA weaknesses and evaluating the related countermeasures, but they present significant drawbacks: a high instrumentation cost or, with cheaper instrumentation, the need to underclock the physical circuit implementing the AES cipher in order to adapt the circuit clock frequency to the power sampling rate of the ADCs or the oscilloscope bandwidth. In this work, we propose a methodology for Correlation and Differential Power Analysis against hardware implementations of an AES core, relying only on a simulative approach. Our solution extracts simulated power traces from a gate-level netlist and then elaborates them using mathematical-statistical procedures. The main advantage of our solution is that it emulates a real attack scenario based on emitted-power analysis without requiring any additional physical circuit or dedicated equipment for power-sample acquisition, and without modifying the working conditions of the target application context (such as the circuit clock frequency). Thus, our approach can be used to validate and benchmark any SCA countermeasure at an early design step, shortening the design process and helping designers find the best solution in a preliminary phase, potentially without additional costs.
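A toy version of the Correlation Power Analysis step illustrates how simulated traces can be "elaborated using mathematical-statistical procedures". This sketch assumes a simple Hamming-weight XOR leakage model rather than the AES S-box intermediate attacked in practice; the key, trace count, and noise level are illustrative assumptions:

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

random.seed(1)
SECRET_KEY = 0x3C
plaintexts = [random.randrange(256) for _ in range(500)]
# Simulated power samples: Hamming weight of the targeted intermediate plus noise
traces = [bin(p ^ SECRET_KEY).count("1") + random.gauss(0, 1.0) for p in plaintexts]

def cpa_recover(plaintexts, traces):
    """CPA: the key guess whose Hamming-weight predictions correlate best with
    the traces is taken as the key. (Under this linear XOR model the bitwise
    complement of the key anti-correlates, so the signed maximum is used.)"""
    return max(range(256),
               key=lambda k: pearson([bin(p ^ k).count("1") for p in plaintexts],
                                     traces))

guess = cpa_recover(plaintexts, traces)
```

In a real evaluation the traces would come from the gate-level power simulation and the predictions from the S-box output, but the correlation machinery is the same.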
Automated vehicle’s behavior decision making using deep reinforcement learning and high-fidelity simulation environment
Year: 2019
Automated vehicles (AVs) are deemed to be the key element of the intelligent transportation system of the future. Many studies have been conducted to improve AVs' ability in environment recognition and vehicle control, while the attention paid to decision making is not enough and the existing decision algorithms are very preliminary. Therefore, a framework for decision-making training and learning is put forward in this paper. It consists of two parts: the deep reinforcement learning (DRL) training program and a high-fidelity virtual simulation environment. Then the basic microscopic behavior, car-following (CF), is trained within this framework. In addition, theoretical analysis and experiments were conducted to evaluate the proposed reward functions for accelerating training using DRL. The results show that, on the premise of driving comfort, the efficiency of the trained AV increases by 7.9% and 3.8%, respectively, compared to two classical adaptive cruise control models, the intelligent driver model and the constant-time-headway policy. Moreover, on a more complex three-lane section, we trained an integrated model combining both CF and lane-changing behavior, with the average speed growing by a further 2.4%. This indicates that our framework is effective for AVs' decision-making learning.
Keywords: Automated vehicle | Decision making | Deep reinforcement learning | Reward function
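A car-following reward of the kind discussed — trading off efficiency, safety (headway tracking), and comfort — might be sketched as below. The weights, targets, and functional form are assumptions for illustration, not the paper's evaluated reward functions:

```python
def car_following_reward(speed, gap, accel,
                         v_des=15.0,   # assumed desired speed, m/s
                         t_des=1.5,    # assumed desired time headway, s
                         w=(1.0, 1.0, 0.1)):
    """Illustrative CF reward: efficiency + safety - comfort penalty.
    Each term is zero at its target and negative away from it."""
    efficiency = -abs(speed - v_des) / v_des          # track the desired speed
    headway = gap / max(speed, 0.1)                   # avoid division by zero
    safety = -abs(headway - t_des) / t_des            # track the desired headway
    comfort = -accel ** 2                             # penalize harsh acceleration
    return w[0] * efficiency + w[1] * safety + w[2] * comfort

# At the ideal state (desired speed, desired headway, zero acceleration)
# the reward is exactly zero; any deviation is penalized.
r_ideal = car_following_reward(15.0, 22.5, 0.0)
r_slow = car_following_reward(10.0, 22.5, 1.0)
```

Shaping all terms to peak at the same operating point is one common way to accelerate DRL training, which is the role the abstract assigns to its reward-function analysis.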
An efficient simulation optimization methodology to solve a multi-objective problem in unreliable unbalanced production lines
Year: 2019
This research develops an expert system to address a novel problem in the literature of buffer allocation and production lines. We investigate real-world unreliable unbalanced production lines where all time-based parameters are probabilistic, including the time between part arrivals, processing times, time between failures, repair times, and setup times. The main contributions of the paper are twofold. First and foremost, the mean processing times of workstations and the buffer capacities, unlike in the existing literature, are considered as decision variables in a multi-objective optimization problem which maximizes the throughput rate and minimizes the total buffer capacity as well as the total cost of the mean process time reductions. Secondly, an efficient methodology is developed that can precisely reflect a real-world system without any unrealistic and/or restrictive assumptions on the probabilistic nature of the system, which are commonly made in the existing literature. One of the greatest challenges in this research is to estimate the throughput rate function, since it highly depends on the random behavior of the system. Thus, a simulation optimization approach is developed based on the Design of Experiments and Response Surface Methodology to fit a regression model for the throughput rate. Finally, the Non-dominated Sorting Genetic Algorithm (NSGA-II) and the Non-dominated Ranked Genetic Algorithm (NRGA) are used to generate high-quality solutions for the aforementioned problem. The methodology is run on a real numerical case, and the experimental results confirm its advantages. This methodology is an innovative expert system with a knowledge base developed through the simulation optimization approach. It can be applied to complex production line problems at large or small scale with different types of decision variables and objective functions, and its application is transferable to other manufacturing systems.
Keywords: Unreliable unbalanced production lines | Buffer allocation problem | Simulation optimization | Design of experiments | Response surface methodology | Meta-heuristics
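The Response Surface Methodology step — fitting a regression model for the throughput rate over design points — can be sketched for a single decision variable. The quadratic response, the design points, and the buffer-capacity numbers are illustrative assumptions, not the paper's case data:

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
    v = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
    for col in range(3):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 3):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back-substitution
        b[i] = (v[i] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

# Hypothetical throughput observations at five buffer-capacity design points,
# generated from a concave response peaking at capacity 8 (illustrative only)
buffers = [2.0, 5.0, 8.0, 11.0, 14.0]
throughput = [10.0 - 0.1 * (x - 8.0) ** 2 for x in buffers]

b = fit_quadratic(buffers, throughput)
best_buffer = -b[1] / (2 * b[2])   # stationary point of the fitted surface
```

In the paper's setting the response surface spans many decision variables and the fitted model is then handed to NSGA-II/NRGA as a cheap surrogate for the simulation; the one-variable quadratic shows the fitting mechanics.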
An improved vehicle tracking method for IEEE 802.11p
Year: 2019
Developing high-mobility positioning methods based on the IEEE 802.11p standard in vehicular ad-hoc networks (VANETs) is essential because of the weaknesses of GNSS in GNSS-dark areas such as forests and tunnels, and the errors these cause in the results. Accurate time-of-arrival (TOA) estimation based on ranging has attracted considerable attention as one of the challenges of vehicle collision-avoidance systems. In this paper, a TOA/range estimation method based on the IEEE 802.11p short preamble is proposed to reduce the impact of multipath vehicular channels and low signal-to-noise ratio (SNR). First, the TOA is estimated using autocorrelation and cross-correlation. Then, a sum approach is presented to find the exact time origin. Simulation results over the International Telecommunication Union vehicular (ITU-A) channel and the additive white Gaussian noise (AWGN) channel demonstrate the superiority of the proposed algorithm under low-SNR and multipath conditions.
Keywords: TOA estimation | IEEE 802.11p | VANETs | Ranging | Autocorrelation | Cross-correlation
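The cross-correlation stage of TOA estimation can be sketched as below. The bipolar training sequence stands in for the IEEE 802.11p short preamble and the channel is plain AWGN with a single path — both simplifying assumptions relative to the paper's ITU-A multipath setting:

```python
import random

def cross_correlate_toa(rx, preamble):
    """Return the sample lag that maximizes the cross-correlation between
    the received signal and the known preamble (the coarse TOA estimate)."""
    n, m = len(rx), len(preamble)
    best_lag, best_val = 0, float("-inf")
    for lag in range(n - m + 1):
        val = sum(rx[lag + i] * preamble[i] for i in range(m))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

random.seed(2)
# Hypothetical bipolar training sequence (not the actual 802.11p short preamble)
preamble = [random.choice((-1.0, 1.0)) for _ in range(32)]
true_delay = 40
rx = [random.gauss(0, 0.5) for _ in range(128)]   # AWGN background
for i, s in enumerate(preamble):
    rx[true_delay + i] += s                        # signal arrives at sample 40

delay = cross_correlate_toa(rx, preamble)
```

In the paper's method this coarse estimate is refined (autocorrelation plus the "sum" approach) to locate the exact time origin at sub-sample precision.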
Multi-directional wavelet edge analysis for surface detection by optical profilometry
Year: 2019
Scientists, engineers, and manufacturers have a pressing need for better inspection and quality-control techniques. Optical metrology draws on optics and computer science for the simulation, design, computation, and inspection required by many scientific and industrial applications, such as optics, mechanics, aeronautics, and electronics. Fringe pattern analysis is a way of performing operations on optical images in order to obtain the interferometric phase map and then extract useful information from it. In this paper, a substantial improvement of a local fringe demodulation algorithm is presented, based on a new multi-directional wavelet. Numerical and experimental work shows an interesting gain compared with other standard algorithms. Our approach runs as fast as the popular phase-recovery methods, but improves the demodulation of noisy fringes with notable accuracy. All of this is achieved without any pattern pre-filtering.
Keywords: Optical imaging | Computer science | Image processing | Multi-directional wavelet | Phase recovery | Fringe projection
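As a point of reference for fringe demodulation, a standard quadrature (carrier-mixing) baseline — not the paper's multi-directional wavelet — can be sketched on a 1-D synthetic fringe; the carrier frequency and phase profile are illustrative assumptions:

```python
import cmath
import math

def demodulate_phase(fringe, carrier_freq, win=5):
    """Quadrature demodulation of a 1-D fringe signal: multiply by the conjugate
    carrier, low-pass with a moving average, and take the angle. With
    carrier_freq = 0.1 a 5-sample window spans exactly one period of the
    double-frequency ripple, nulling it."""
    n = len(fringe)
    mixed = [fringe[i] * cmath.exp(-2j * math.pi * carrier_freq * i)
             for i in range(n)]
    half = win // 2
    phase = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        avg = sum(mixed[lo:hi]) / (hi - lo)
        phase.append(cmath.phase(avg))
    return phase

# Synthetic fringe I(x) = cos(2*pi*f0*x + phi(x)) with a slowly varying phase
f0 = 0.1
true_phase = [0.5 * math.sin(2 * math.pi * i / 200) for i in range(200)]
fringe = [math.cos(2 * math.pi * f0 * i + true_phase[i]) for i in range(200)]
est = demodulate_phase(fringe, f0)
```

A wavelet-based method replaces the fixed carrier and window with localized, direction-tuned analysis, which is what gives it robustness on noisy, non-uniform fringes.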
Artificial Intelligence in Medical Education: Best Practices Using Machine Learning to Assess Surgical Expertise in Virtual Reality Simulation
Year: 2019
OBJECTIVE: Virtual reality simulators track all movements and forces of simulated instruments, generating enormous datasets which can be further analyzed with machine learning algorithms. These advancements may increase the understanding, assessment and training of psychomotor performance. Consequently, the application of machine learning techniques to evaluate performance on virtual reality simulators has led to an increase in the volume and complexity of publications which bridge the fields of computer science, medicine, and education. Although all disciplines stand to gain from research in this field, important differences in reporting exist, limiting interdisciplinary communication and knowledge transfer. Thus, our objective was to develop a checklist to provide a general framework when reporting or analyzing studies involving virtual reality surgical simulation and machine learning algorithms. By including a total score as well as clear subsections of the checklist, authors and reviewers can both easily assess the overall quality and specific deficiencies of a manuscript. DESIGN: The Machine Learning to Assess Surgical Expertise (MLASE) checklist was developed to help computer science, medicine, and education researchers ensure quality when producing and reviewing virtual reality manuscripts involving machine learning to assess surgical expertise. SETTING: This study was carried out at the McGill Neurosurgical Simulation and Artificial Intelligence Learning Centre. PARTICIPANTS: The authors applied the checklist to 12 articles using machine learning to assess surgical expertise in virtual reality simulation, obtained through a systematic literature review. RESULTS: Important differences in reporting were found between medical and computer science journals. The medical journals proved stronger in discussion quality and weaker in areas related to study design. The opposite trends were observed in computer science journals. 
CONCLUSIONS: This checklist will aid in narrowing the knowledge divide between computer science, medicine, and education, helping to facilitate the burgeoning field of machine learning-assisted surgical education. (J Surg Ed, 2019. Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.)
KEY WORDS: simulation | surgery | education | artificial intelligence | assessment | machine learning
Towards an integrated machine-learning framework for model evaluation and uncertainty quantification
Year: 2019
We introduce a new paradigm for treating and exploiting simulation data, serving in parallel as an alternative workflow for model evaluation and uncertainty quantification. Instead of reporting simulations of base-case and specific variation scenarios, databases covering a wide spectrum of operational conditions are built by means of machine learning using sophisticated mathematical algorithms. While the approach works for all sorts of computer-aided engineering applications, the present contribution addresses the CFD/CMFD sub-branch, with application to a widely used benchmark of convective flow boiling. In addition to comparing simulation and experimental results on a case-by-case basis, machine learning is used to create their respective (CFD and experiment) data-driven models (DDM), which at a later stage serve for assessing the predictive performance of the CFD models over a wider range of experimental conditions, hence providing a high-level classification of their range of applicability.
Keywords: Fluid flow simulation | Wall boiling | Data analytics | Digital Twin | Machine-learning | Data-driven models (DDM)
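The closing idea — building data-driven models of both the CFD and the experiment, then comparing them over a wider range of conditions to classify applicability — can be sketched minimally. The linear regressors, the wall-boiling numbers, and the 5-degree acceptance threshold are illustrative assumptions only:

```python
def fit_linear(xs, ys):
    """Least-squares line y = a + b*x (a minimal stand-in for the DDM regressor)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical wall-temperature data versus heat flux (illustrative numbers)
flux = [50.0, 100.0, 150.0, 200.0]   # kW/m^2
t_exp = [110.0, 121.0, 131.0, 142.0]  # experimental wall temperature, deg C
t_cfd = [111.0, 120.0, 133.0, 140.0]  # CFD-predicted wall temperature, deg C

a_e, b_e = fit_linear(flux, t_exp)    # experiment DDM
a_c, b_c = fit_linear(flux, t_cfd)    # CFD DDM

def discrepancy(q):
    """Disagreement between the two DDMs at heat flux q."""
    return abs((a_c + b_c * q) - (a_e + b_e * q))

# Classify applicability over a wider range than the original cases
applicable = all(discrepancy(q) < 5.0 for q in range(50, 301, 25))
```

The real DDMs would be multivariate nonlinear models over many operating parameters; the point of the sketch is the comparison of two fitted surrogates outside the original case-by-case conditions.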