Download and display articles related to data warehouse :: Page 1
Download the best ISI articles with Persian translation


Search results - data warehouse

Number of articles found: 18
No. | Title | Type
1 Knowledge Management Process for Air Quality Systems based on Data Warehouse Specification
Knowledge management process for air quality systems based on data warehouse specification - 2021
Even though several systems for Air Quality (AQ) monitoring have been in existence for over a decade, a research model for Knowledge Management (KM) of AQ data has to be created in order to enhance the decision-making and organize the air quality data collected from the Internet of Things (IoT) consumer devices. This model should be made more performant by ensuring greater flexibility and interoperability between devices and emerging technologies. In this context, we propose an approach for representing Data WareHouse (DWH) schema based on an ontology that captures the multidimensional knowledge of tools, techniques, and technologies used for novel AQ systems. This enhances decision-making by coping with potential problems such as data sources heterogeneity and covering the various phases of the decision-making life cycle.
Keywords: Knowledge Management | Air Quality | Data Warehouse | Conceptual Data Model | Multidimensional Design | Ontology.
English article
2 Using big data database to construct new GFuzzy text mining and decision algorithm for targeting and classifying customers
Using a big data database to construct a new GFuzzy text mining and decision algorithm for targeting and classifying customers - 2019
After an enterprise builds a data warehouse, it can record information related to customer interactions using structured and unstructured data. The intention is to convert these data into useful information for decision-making to ensure business continuity. Hence, this study proposes a new Chinese text classification model for the project management office (PMO) using fuzzy semantics and text mining techniques. First, content analysis is performed on the unstructured data to convert important textual information and compile it into a keyword index. Next, a classification and decision algorithm for grey situations and fuzzy (GFuzzy) is used to categorize textual data by three characteristics: maximum impact, moderate impact, and minimum impact. The purpose is to analyze consumer behaviors for the accurate classification of customers. Lastly, a more effective marketing strategy is formulated to target the various customer combinations, growth models, and the best mode of service. A company database of interactions with customers is used to construct a text mining model and to analyze the decision process of its PMO. The purpose is to test the feasibility and validity of the proposed model so that enterprises are provided with better marketing strategies and PMO processes aimed at their customers.
Keywords: Big data warehouse | Content analysis | Data mining | Fuzzy grey situation decision-making algorithm | Project management office | Customer relations management
English article
3 Leveraging hospital big data to monitor flu epidemics
Leveraging hospital big data to monitor flu epidemics - 2018
Background and Objective: Influenza epidemics are a major public health concern and require a costly and time-consuming surveillance system at different geographical scales. The main challenge is being able to predict epidemics. Besides traditional surveillance systems, such as the French Sentinel network, several studies proposed prediction models based on internet-user activity. Here, we assessed the potential of hospital big data to monitor influenza epidemics. Methods: We used the clinical data warehouse of the Academic Hospital of Rennes (France) and then built different queries to retrieve relevant information from electronic health records to gather weekly influenza-like illness activity. Results: We found that the query most highly correlated with Sentinel network estimates was based on emergency reports concerning discharged patients with a final diagnosis of influenza (Pearson’s correlation coefficient (PCC) of 0.931). The other tested queries were based on structured data (ICD-10 codes of influenza in Diagnosis-related Groups, and influenza PCR tests) and performed best (PCC of 0.981 and 0.953, respectively) during the flu season 2014–15. This suggests that both ICD-10 codes and PCR results are associated with severe epidemics. Finally, our approach allowed us to obtain additional patients’ characteristics, such as the sex ratio or age groups, comparable with those from the Sentinel network. Conclusions: Hospital big data seem to have great potential for monitoring influenza epidemics in near real-time. Such a method could constitute a complementary tool to standard surveillance systems by providing additional characteristics on the concerned population or by providing information earlier. This system could also be easily extended to other diseases with possible activity changes. Additional work is needed to assess the real efficacy of predictive models based on hospital big data to predict flu epidemics.
Keywords: Health big data | Clinical data warehouse | Information retrieval system | Health Information Systems | Influenza | Sentinel surveillance
English article
4 On big data-guided upstream business research and its knowledge management
On big data-guided upstream business research and its knowledge management - 2018
The emerging Big Data integration imposes diverse challenges, compromising sustainable business research practice. The heterogeneity, multi-dimensionality, velocity, and massive volumes that challenge the Big Data paradigm may preclude effective data and system integration processes. Business alignments are affected within and across joint ventures as enterprises attempt to adapt rapidly to changes in industrial environments. In the context of the Oil and Gas industry, we design integrated artefacts for a resilient multidimensional warehouse repository. With access to several decades of resource data in upstream companies, we incorporate knowledge-based data models with spatial-temporal dimensions in data schemas to minimize ambiguity in warehouse repository implementation. The design considerations ensure uniqueness and monotonic properties of dimensions, maintaining the connectivity between artefacts and achieving the business alignments. The multidimensional attributes give Big Data analysts scope for business research, yielding valuable new knowledge for decision support systems and adding further business value at geographic scales.
Keywords: Upstream business | Heterogeneous and multidimensional data | Data warehousing and mining | Big Data paradigm | Spatial-temporal dimensions
English article
5 Using big data database to construct new GFuzzy text mining and decision algorithm for targeting and classifying customers
Using a big data database to construct a new GFuzzy text mining and decision algorithm for targeting and classifying customers - 2018
After an enterprise builds a data warehouse, it can record information related to customer interactions using structured and unstructured data. The intention is to convert these data into useful information for decision making to ensure business continuity. Hence, this study proposes a new Chinese text classification model for the project management office (PMO) using fuzzy semantics and text mining techniques. First, content analysis is performed on the unstructured data to convert important textual information and compile it into a keyword index. Next, a classification and decision algorithm for grey situations and fuzzy (GFuzzy) is used to categorize textual data by three characteristics: maximum impact, moderate impact, and minimum impact. The purpose is to analyze consumer behaviors for the accurate classification of customers. Lastly, a more effective marketing strategy is formulated to target the various customer combinations, growth models, and the best mode of service. A company database of interactions with customers is used to construct a text mining model and to analyze the decision process of its PMO. The purpose is to test the feasibility and validity of the proposed model so that enterprises are provided with better marketing strategies and PMO processes aimed at their customers.
Keywords: Big data warehouse | Content analysis | Data mining | Fuzzy grey situation decision-making algorithm | Project management office | Customer relations management
English article
6 Using big data database to construct new GFuzzy text mining and decision algorithm for targeting and classifying customers
Using a big data database to construct a new GFuzzy text mining and decision algorithm for targeting and classifying customers - 2018
After an enterprise builds a data warehouse, it can record information related to customer interactions using structured and unstructured data. The intention is to convert these data into useful information for decision making to ensure business continuity. Hence, this study proposes a new Chinese text classification model for the project management office (PMO) using fuzzy semantics and text mining techniques. First, content analysis is performed on the unstructured data to convert important textual information and compile it into a keyword index. Next, a classification and decision algorithm for grey situations and fuzzy (GFuzzy) is used to categorize textual data by three characteristics: maximum impact, moderate impact, and minimum impact. The purpose is to analyze consumer behaviors for the accurate classification of customers. Lastly, a more effective marketing strategy is formulated to target the various customer combinations, growth models, and the best mode of service. A company database of interactions with customers is used to construct a text mining model and to analyze the decision process of its PMO. The purpose is to test the feasibility and validity of the proposed model so that enterprises are provided with better marketing strategies and PMO processes aimed at their customers.
Keywords: Big data warehouse | Content analysis | Data mining | Fuzzy grey situation decision-making algorithm | Project management office | Customer relations management
English article
7 Big data normalization for massively parallel processing databases
Year of publication: 2017 - English PDF pages: 8 - Persian doc pages: 35
In massively parallel processing (MPP) databases, high query performance and ad-hoc querying are usually treated as mutually incompatible goals, and there is likewise a conflict between ease of data-model extension and ease of analysis. The newer approach called the "data lake" promises that adding new data to the model will make extending the data model easier, yet a lake is highly prone to ending up as an unstructured swamp of data: without standards and discipline, the data lake drifts out of control, finding and using data becomes difficult, and the data eventually become unusable. This paper introduces a new technique that keeps big data highly normalized using anchor modeling; this technique yields a very efficient way to store information and use resources, and thereby delivers, for the first time, high-performance ad-hoc querying in MPP databases. The technique is a near-ideal way to evolve the data model toward a data lake, while the model is inherently protected from degenerating into one. A case study is also included, describing how this approach has been used for over three years in the existing data warehouse at Avito (a Russian website); results of experiments with real data on HP Vertica are presented as well. The paper is based on results from a thesis presented at the 34th International Conference on Conceptual Modeling in 2015 [1]; it is extended with numerical results obtained over one to three years of normalizing big data in key areas of the data warehouse. Limitations are also described, arising from the use of only a single MPP database cluster.
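As a rough illustration of the anchor-modeling idea described in this abstract, the sketch below (a hypothetical example, not the paper's code; all names are invented) splits a wide record into an anchor of surrogate keys plus one narrow table per attribute, so new attributes can be added without altering existing tables:

```python
# Hypothetical sketch of anchor-style normalization: each attribute of a
# wide row is stored in its own narrow key-value table, keyed by the
# anchor (surrogate key). Adding a new attribute later only adds a new
# table; existing tables are untouched.

def normalize(rows, key):
    """Split wide rows into an anchor (list of keys) and per-attribute tables."""
    anchor = [row[key] for row in rows]
    attributes = {}
    for row in rows:
        for field, value in row.items():
            if field == key:
                continue  # the key lives only in the anchor
            attributes.setdefault(field, {})[row[key]] = value
    return anchor, attributes

# Invented example data, loosely in the spirit of a classifieds site.
rows = [
    {"id": 1, "city": "Moscow", "visits": 10},
    {"id": 2, "city": "Kazan", "visits": 3},
]
anchor, attrs = normalize(rows, "id")
print(anchor)            # [1, 2]
print(attrs["city"][1])  # Moscow
```

A real MPP implementation would additionally shard each narrow table by the anchor key so that joins between attribute tables stay local to a node.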
Keywords: Big data | MPP | Database | Normalization | Analytics | Ad-hoc | Query | Modeling | Performance | Data lake
Translated article
8 Expediting analytical databases with columnar approach
Expediting analytical databases with columnar approach-2017
The approaches and discussions given in this paper offer applicable solutions for a number of scenarios taking place in the contemporary world that are dealing with performance issues in development and use of analytical databases for the support of both tactical and strategic decision making. The paper introduces a novel method for expediting the development and use of analytical databases that combines columnar database technology with an approach based on denormalizing data tables for analysis and decision support. This method improves the feasibility and quality of tactical decision making by making critical information more readily available. It also improves the quality of longer term strategic decision making by widening the range of feasible queries against the vast amounts of available information. The advantages include the improvements in the performance of the ETL process (the most common time-consuming bottleneck in most implementations of data warehousing for quality decision support) and in the performance of the individual analytical queries. These improvements in the critical decision support infrastructure are achieved without resulting in insurmountable storage-size increase requirements. The efficiencies and advantages of the introduced approach are illustrated by showing the application in two relevant real-world cases.
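A minimal sketch of the columnar idea behind this approach (a hypothetical illustration with invented data; a real columnar engine of the kind the paper discusses works at a very different scale) shows why analytical aggregates get cheaper: each attribute is stored contiguously, so a query touches only the columns it needs instead of every full row.

```python
# Hypothetical sketch: transpose row-oriented records into one list per
# column. An aggregate such as SUM then scans a single contiguous column
# rather than skipping through whole rows.

rows = [
    ("2017-01-01", "east", 120.0),
    ("2017-01-01", "west", 80.0),
    ("2017-01-02", "east", 95.0),
]

# Column-store layout: one list per attribute.
dates, regions, amounts = map(list, zip(*rows))

# A SUM over 'amounts' reads only that column.
total = sum(amounts)
print(total)  # 295.0
```

In a denormalized analytical table the unused columns can be very wide, which is exactly where skipping them pays off most.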
Keywords: Data warehouses | Decision support | Big data | Performance | ETL | Columnar databases
English article
9 Exploring the Information Asymmetric factors of Pledges of Warehouse Certificates Based on Internet of Things
Exploring the information-asymmetry factors of pledges of warehouse certificates based on the Internet of Things - 2017
In the Internet Plus era, establishing an information platform is the way to solve the problem of information asymmetry. However, Internet Plus should focus not on the Internet as a tool for solving the problem, but on the nature of the problem to be solved. The nature of an information-sharing platform for pledges of warehouse certificates is to strengthen supervision and improve management. Studying the factors that influence the system's information is the premise of constructing such an information-sharing platform. This paper investigates the factors that cause information asymmetry in the pledges-of-warehouse-certificates business, using exploratory factor analysis on collected questionnaires designed around that information asymmetry on the basis of literature analysis and expert investigation. The research shows that four kinds of factors cause information asymmetry in this business: factors concerning the pledged goods, the financing enterprises, the administrative agencies, and the other social agencies involved in warehouse receipt pledging.
English article
10 Efficient Data Access and Performance Improvement Model for Virtual Data Warehouse
Efficient data access and performance improvement model for virtual data warehouse - 2017
This paper presents a model for improving query performance in Virtual Data Warehouse (VDW) by simulating VDW environment on a cellular phone billing and customer care system which involves processing millions of Call Detail Records (CDRs) generated by thousands of counters across the country. Processing aggregations on millions of CDRs requires expensive systems, especially when analysing customers’ traffic trends and encompasses several performance optimization techniques used for improvement of query performance in VDW. In this regard, VDW offers several advantages such as real-time analytic reports, reduced maintenance, low cost solution and flexible data integration, but performance is still one of its critical shortcomings. This paper enhances performance of VDW by using techniques like partitioned materialized views, index performance optimization, query rewrite in materialized views, analytic functions, sub-queries and enabling parallel execution etc. The study uses Oracle 10g as a backend database; Oracle management console and SQL query analyser are used for monitoring performance concerns during validation of VDW model; standard PL/SQL developer is used for extracting and loading test data; and finally, Hyperion Development suite is used for testing time comparisons of datasets both in normal OLTP and simulated VDW environments.
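The precomputed-aggregate idea behind the materialized views mentioned in this abstract can be sketched as follows (a hypothetical Python illustration; `refresh_view` and the CDR fields are invented names, not the paper's implementation): the aggregate over raw call detail records is computed once, and repeated analytic queries are then answered from the cached result instead of rescanning millions of rows.

```python
# Hypothetical sketch of a materialized view over call detail records
# (CDRs): aggregate once at refresh time, then serve queries from the
# precomputed result.

from collections import defaultdict

cdrs = [
    {"subscriber": "A", "seconds": 60},
    {"subscriber": "B", "seconds": 30},
    {"subscriber": "A", "seconds": 90},
]

def refresh_view(records):
    """Materialize total call seconds per subscriber (the 'view')."""
    view = defaultdict(int)
    for r in records:
        view[r["subscriber"]] += r["seconds"]
    return dict(view)

view = refresh_view(cdrs)       # done once, e.g. on a schedule
print(view["A"])                # 150 -- answered without rescanning cdrs
```

In the Oracle setting the paper describes, the database can also rewrite matching queries to use such a view automatically; the sketch only shows the caching principle.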
Keywords: Business Intelligence | OLTP | Performance Optimization | Query Optimization | Virtual Data Warehouse
English article