Row | Title | Type |
---|---|---|
1 |
Ontological Approach for Semantic Modelling of Malay Translated Qur’an
(2022) This thesis contributes to the areas of ontology development and analysis, natural
language processing (NLP), Information Retrieval (IR) and Language Resource
and Corpus Development.
Research in Natural Language Processing and semantic search for English has
shown successful results for more than a decade. However, it is difficult to adapt
those techniques to the Malay language, because its complex morphology and orthographic forms are very different from English. Moreover, limited resources and
tools for computational linguistic analysis are available for Malay. In this thesis,
we address those issues and challenges by proposing MyQOS, the Malay Qur’an
Ontology System, a prototype ontology-based IR with semantics for representing
and accessing a Malay translation of the Qur’an. This supports the development
of a semantic search engine and a question answering system, and provides a framework for storing and accessing a Malay language corpus and for providing computational linguistics resources. The primary use of MyQOS in the current research
is for creating and improving the quality and accuracy of the query mechanism
to retrieve information embedded in the Malay text of the Qur’an translation.
To demonstrate the feasibility of this approach, we describe a new architecture
of morphological analysis for MyQOS and query algorithms based on MyQOS.
Data analysis used two measures, precision and recall, with data obtained from the
MyQOS Corpus across three search engines. Precision and recall for semantic search are 0.8409 (84%) and 0.8043 (80%), roughly double
the results of the question-answer search, which are 0.4971 (50%) for precision and
0.6027 (60%) for recall. Semantic search gives higher precision and higher recall
than the other two methods, indicating that it returns
more relevant results than irrelevant ones. To conclude, this research is among
the work on retrieval of Qur’anic texts in the Malay language that outlines
state-of-the-art information retrieval system models. Thus, the use of
MyQOS will help Malay readers to understand the Qur’an better. Furthermore, the creation of a Malay language corpus and computational linguistics
resources will benefit other researchers, especially in religious texts, morphological
analysis and semantic modelling. |
English article |
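The precision and recall figures reported for MyQOS follow the standard information-retrieval definitions. A minimal sketch of how such scores are computed; the document sets below are invented for illustration and are not taken from the MyQOS corpus:

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved: ids of documents returned by the search engine
    relevant:  ids of documents judged relevant for the query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    true_positives = len(retrieved & relevant)
    precision = true_positives / len(retrieved) if retrieved else 0.0
    recall = true_positives / len(relevant) if relevant else 0.0
    return precision, recall

# Illustrative query: 8 of the 10 returned documents are relevant,
# out of 11 relevant documents in total.
p, r = precision_recall(range(10), list(range(8)) + [20, 21, 22])
```

With these sets, precision is 8/10 and recall is 8/11, mirroring how the 0.8409/0.8043 figures in the abstract would be obtained per query and then averaged.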
2 |
A verb-frame frequency account of constraints on long-distance dependencies in English
(2021) Going back to Ross (1967) and Chomsky (1973), researchers have sought to understand what conditions permit
long-distance dependencies in language, such as between the wh-word what and the verb bought in the sentence
‘What did John think that Mary bought?’. In the present work, we attempt to understand why changing the main
verb in wh-questions affects the acceptability of long-distance dependencies out of embedded clauses. In
particular, it has been claimed that factive and manner-of-speaking verbs block such dependencies (e.g., ‘What
did John know/whisper that Mary bought?’), whereas verbs like think and believe allow them. Here we provide 3
acceptability judgment experiments of filler-gap constructions across embedded clauses to evaluate four types of
accounts based on (1) discourse; (2) syntax; (3) semantics; and (4) our proposal related to verb-frame frequency.
The patterns of acceptability are most simply explained by two factors: verb-frame frequency, such that de-
pendencies with verbs that rarely take embedded clauses are less acceptable; and construction type, such that
wh-questions and clefts are less acceptable than declaratives. We conclude that the low acceptability of filler-gap
constructions formed by certain sentence-complement verbs is due to infrequent linguistic exposure. Keywords: Sentence processing | Frequency effects | Long-distance dependencies | Syntactic islands |
English article |
3 |
TITAN: A knowledge-based platform for Big Data workflow management
(2021) Modern Big Data applications are moving beyond scalable data processing
and analysis to provide advanced functionality that can exploit and understand the
underpinning knowledge. This change is promoting the development of tools at the intersection of data
processing, data analysis, knowledge extraction and management. In this paper, we propose TITAN, a
software platform for managing all the life cycle of science workflows from deployment to execution
in the context of Big Data applications. This platform is characterised by a design and operation mode
driven by semantics at different levels: data sources, problem domain and workflow components. The
proposed platform is developed upon an ontological framework of meta-data consistently managing
processes and models and taking advantage of domain knowledge. TITAN comprises a well-grounded
stack of Big Data technologies including Apache Kafka for inter-component communication, Apache
Avro for data serialisation and Apache Spark for data analytics. A series of use cases are conducted for
validation, which comprises workflow composition and semantic meta-data management in academic
and real-world fields of human activity recognition and land use monitoring from satellite images.
Keywords: Big Data analytics | Semantics | Knowledge extraction |
English article |
4 |
An intelligent semantic system for real-time demand response management of a thermal grid
(2020) “Demand Response” energy management of thermal grids requires consideration of a wide range of factors at
building and district level, supported by continuously calibrated simulation models that reflect real operation
conditions. Moreover, cross-domain data interoperability between concepts used by the numerous hardware and
software is essential, in terms of Terminology, Metadata, Meaning and Logic. This paper leverages domain
ontology to map and align the semantic resources that underpin building and district energy management, with a
focus on the optimization of a thermal grid informed by real-time energy demand. The intelligence of the system
is derived from simulation-based optimization, informed by calibrated thermal models that predict the network’s
energy demand to inform (near) real-time generation. The paper demonstrates that the use of semantics helps
alleviate the endemic energy performance gap, as validated in a real district heating network where a 36% reduction
in operating cost and a 43% reduction in CO2 emissions were observed relative to baseline operational
data. Keywords: Thermal grid | Demand response | Energy optimization | Operation cost | Data interoperability | Semantic ontology |
English article |
5 |
Indoor location identification of patients for directing virtual care: An AI approach using machine learning and knowledge-based methods
(2020) In a digitally enabled healthcare setting, we posit that an individual’s current location is pivotal for supporting
many virtual care services—such as tailoring educational content towards an individual’s current location, and,
hence, current stage in an acute care process; improving activity recognition for supporting self-management in a
home-based setting; and guiding individuals with cognitive decline through daily activities in their home.
However, unobtrusively estimating an individual’s indoor location in real-world care settings is still a challenging
problem. Moreover, the needs of location-specific care interventions go beyond absolute coordinates and
require the individual’s discrete semantic location; i.e., it is the concrete type of an individual’s location (e.g., exam
vs. waiting room; bathroom vs. kitchen) that will drive the tailoring of educational content or recognition of
activities. We utilized Machine Learning methods to accurately identify an individual’s discrete location, together
with knowledge-based models and tools to supply the associated semantics of identified locations. We
considered clustering solutions to improve localization accuracy at the expense of granularity, and investigated
sensor fusion-based heuristics to rule out false location estimates. We present an AI-driven indoor localization
approach that integrates both data-driven and knowledge-based processes and artifacts. We illustrate the application
of our approach in two compelling healthcare use cases, and empirically validate our localization
approach at the emergency unit of a large Canadian pediatric hospital. Keywords: Virtual care | Ambient sensors | Indoor localization | Machine learning | Semantic web | eHealth platform | Data fusion | Self-management | Ambient assisted living | Activities of daily living |
English article |
6 |
Integrative systematic review meta-analysis and bioinformatics identifies MicroRNA-21 and its target genes as biomarkers for colorectal adenocarcinoma
(2020) Background: Advanced colorectal cancer has poor survival and is difficult to treat. Therefore, there is an urgent need
for biomarkers to diagnose this cancer at earlier manageable stages. Micro-RNAs (miRNAs) are amongst the most
significant biomarkers that have shown promise in improving management and early detection of different types
of cancers. However, since miRNAs are non-coding, the main limitation of using them as biomarkers is that they
have no associated phenotype and are therefore difficult to validate using other techniques. This makes it difficult
to understand the mechanism of miRNA in disease initiation and progression, so any methodology
that can provide semantics for miRNA expression would enhance the understanding of the role of miRNA in
disease.
Methods: Here we report an integrative meta-analysis and bioinformatics methodology that showed microRNA-
21 and its associated target mRNA to be the most significant predictive biomarkers for colorectal adenoma and
adenocarcinoma. After drawing key inferences by meta-analysis, the authors then developed a bioinformatics
method to identify mir-21 gene targeting in a specific tissue using two different bioinformatics approaches;
absolute GSEA (Gene Set Enrichment Analysis) and LIMMA (Linear Models for MicroArray data) to identify
differentially expressed genes of miRNA-21.
Results: The GSEA intersection with mir-21 gene targets was a subset of the longer gene list
obtained from the GEO2R intersect. In our study, both the longer GEO2R gene target list and the more focused
GSEA list established that mir-21 targets numerous functional pathways that are mostly interconnected.
Our three-step bioinformatics approach identified ABCB1, HPGD, BCL2, TIAM1, TLR3, and PDCD4 as common
targets for mir-21 in both adenoma and adenocarcinoma, suggesting they are biomarkers for early CRC.
Conclusions: The approach in this study combines big data from the scientific literature
with novel bioinformatics to produce a methodology that can be used first to identify which microRNAs are
involved in a specific disease, and then to identify a panel of biomarkers derived from the microRNAs’ target
genes; from these target genes the functional significance of the microRNAs can be inferred, providing
better clinical value for the surgeon. Keywords: Tissue/serum microRNA-21 | Biomarkers | Colorectal cancer | Bioinformatics |
English article |
7 |
Must the random man be unrelated? A lingering misconception in forensic genetics
(2020) A nearly universal practice among forensic DNA scientists is to mention an unrelated person as
the possible alternative source of a DNA stain, when one in fact refers to an unknown person. Hence,
experts typically express their conclusions with statements like: “The probability of the DNA evidence is
X times higher if the suspect is the source of the trace than if another person unrelated to the suspect is
the source of the trace.” Published forensic guidelines encourage such allusions to the unrelated person.
However, as the authors show here, rational reasoning and population genetic principles do not require
the conditioning of the evidential value on the unrelatedness between the unknown individual and the
person of interest (e.g., a suspect). Surprisingly, this important semantic issue has been overlooked for
decades, despite its potential to mislead the interpretation of DNA evidence by criminal justice system
stakeholders. Keywords: DNA evidence | Fact-finder | Match probability | Relatedness | Semantics |
English article |
8 |
High-performance spatiotemporal trajectory matching across heterogeneous data sources
(2020) In the era of big data, the movement of the same object or person can be recorded by different
devices with different measurement accuracies and sampling rates. Matching and conflating these
heterogeneous trajectories help to enhance trajectory semantics, describe user portraits, and discover
specified groups from human mobility. In this paper, we propose a high-performance approach
for matching spatiotemporal trajectories across heterogeneous massive datasets. Two indicators, i.e.,
Time Weighted Similarity (TWS) and Space Weighted Similarity (SWS), are proposed to measure
the similarity of spatiotemporal trajectories. The core idea is that trajectories are more similar if
they stay close in a longer time and distance. A distributed computing framework based on Spark
is built for efficient trajectory matching among massive datasets. In the framework, the trajectory
segments are partitioned into 3-dimensional space–time cells for parallel processing, and a novel
method of segment reference point is designed to avoid duplicated computation. We conducted
extensive matching experiments on real-world and synthetic trajectory datasets. The experimental
results illustrate that the proposed approach outperforms other similarity metrics in accuracy, and the
Spark-based framework greatly improves the efficiency in spatiotemporal trajectory matching. Keywords: Distributed computing | Spatiotemporal big data | Trajectory similarity | Trajectory matching |
English article |
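The intuition behind TWS described above, that two trajectories are more similar the longer they stay close, can be illustrated with a simplified time-weighted score. This sketch assumes both trajectories are sampled at identical timestamps and uses an arbitrary 50 m closeness threshold; the paper's actual TWS/SWS formulas and the Spark-based space-time partitioning are not reproduced here:

```python
import math

def time_weighted_similarity(traj_a, traj_b, dist_threshold=50.0):
    """Fraction of observed time in which two trajectories stay within
    dist_threshold of each other.

    traj_a, traj_b: lists of (timestamp, x, y) samples; both trajectories
    are assumed to share the same timestamps (a simplification).
    """
    close_time = 0.0
    total_time = 0.0
    for i in range(1, len(traj_a)):
        dt = traj_a[i][0] - traj_a[i - 1][0]   # duration of this interval
        total_time += dt
        _, xa, ya = traj_a[i]
        _, xb, yb = traj_b[i]
        # Count the interval as "close" if the objects end it near each other.
        if math.hypot(xa - xb, ya - yb) <= dist_threshold:
            close_time += dt
    return close_time / total_time if total_time else 0.0

# Two objects sampled every 10 s: close for the first three intervals,
# then diverging in the last one.
a = [(0, 0, 0), (10, 10, 0), (20, 20, 0), (30, 30, 0), (40, 200, 0)]
b = [(0, 5, 0), (10, 12, 0), (20, 25, 0), (30, 35, 0), (40, 400, 0)]
score = time_weighted_similarity(a, b)
```

A space-weighted variant would weight each interval by distance travelled rather than elapsed time, which is the distinction the TWS/SWS pair captures.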
9 |
Using big data database to construct new GFuzzy text mining and decision algorithm for targeting and classifying customers
(2019) After an enterprise builds a data warehouse, it can record information related to customer interactions using
structured and unstructured data. The intention is to convert these data into useful information for decision-making
to ensure business continuity. Hence, this study proposes a new Chinese text classification model for the
project management office (PMO) using fuzzy semantics and text mining techniques. First, content analysis is
performed on the unstructured data to convert important textual information and compile it into a keyword
index. Next, a classification and decision algorithm for grey situations and fuzzy (GFuzzy) is used to categorize
textual data by three characteristics: maximum impact, moderate impact, and minimum impact. The purpose is
to analyze consumer behaviors for the accurate classification of customers. Lastly, a more effective marketing
strategy is formulated to target the various customer combinations, growth models, and the best mode of service.
A company database of interactions with customers is used to construct a text mining model and to analyze the
decision process of its PMO. The purpose is to test the feasibility and validity of the proposed model so that
enterprises are provided with better marketing strategies and PMO processes aimed at their customers. Keywords: Big data warehouse | Content analysis | Data mining | Fuzzy grey situation decision-making algorithm | Project management office | Customer relations management |
English article |
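The three-way impact classification described above (maximum, moderate, minimum impact) can be mimicked with a toy frequency-based bucketing. The real GFuzzy algorithm uses grey situation decision-making and fuzzy semantics, which this sketch does not implement; the keywords and documents below are invented for illustration:

```python
from collections import Counter

def bucket_keywords(documents, keywords):
    """Split keywords into three impact tiers by corpus frequency.

    This is only a stand-in for GFuzzy's three-way outcome: the most
    frequent third of keywords is labelled maximum impact, the next
    third moderate, and the rest minimum.
    """
    counts = Counter()
    for doc in documents:
        for kw in keywords:
            counts[kw] += doc.count(kw)
    ranked = [kw for kw, _ in counts.most_common()]
    third = max(1, len(ranked) // 3)
    return {
        "maximum_impact": ranked[:third],
        "moderate_impact": ranked[third:2 * third],
        "minimum_impact": ranked[2 * third:],
    }

docs = ["refund delay refund", "delay shipping", "refund question"]
tiers = bucket_keywords(docs, ["refund", "delay", "shipping"])
```

In the PMO setting described in the abstract, the tiers would then drive which customer segments receive which marketing strategy.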
10 |
Automatic extraction of expert information for an Internet of Things knowledge base
Publication year: 2019 - English PDF pages: 7 - Persian doc pages: 14. With the rapid development of IoT technology, the need for effective and accurate retrieval of domain knowledge is increasing. Automatic extraction of expert information from massive numbers of web pages, together with a dynamic, unified representation model, is important for the knowledge base. However, the marked differences in structure and content semantics between the web pages of any two websites mean that traditional web crawlers cannot understand the meaning of a page and extract the critical expert information. Therefore, a six-dimensional expert profile model is introduced, and a sequence labeling method based on an LSTM-CRF model is then presented for the automatic extraction of semantically rich information, drawing on organizational structure, word meanings, and expert attributes. Experimental results on the test dataset showed that precision and recall are 67.8% and 66.6% for experts' work experience and 82.4% and 79.6% for their research field, respectively. Moreover, the average F-measure on certain expert attributes such as name, title, email, and achievements reaches 82.5%, which is better than the results of the MEMM and LSTM algorithms.
Keywords: Internet of Things | Expert profile model | Deep learning | Sequence labeling |
Translated article |