Digital Livestock Farming
As the global human population increases, livestock agriculture must adapt to provide more livestock products with improved efficiency while also addressing concerns about animal welfare, environmental sustainability, and public health. The purpose of this paper is to critically review the current state of the art in digitalizing animal agriculture with Precision Livestock Farming (PLF) technologies, specifically biometric sensors, big data, and blockchain technology. Biometric sensors, whether noninvasive or invasive, monitor an individual animal’s health and behavior in real time, allowing farmers to integrate these data for population-level analyses. Real-time information from biometric sensors is processed and integrated using big data analytics systems that rely on statistical algorithms to sort through large, complex data sets and provide farmers with relevant trending patterns and decision-making tools. Sensor-enabled blockchain technology affords secure and guaranteed traceability of animal products from farm to table, a key advantage in monitoring disease outbreaks and preventing related economic losses and food-related health pandemics. Thanks to PLF technologies, livestock agriculture has the potential to address the abovementioned pressing concerns by becoming more transparent and fostering increased consumer trust. However, PLF technologies are still evolving, and core component technologies (such as blockchain) are still in their infancy and insufficiently validated at scale. The next generation of PLF technologies calls for preventive and predictive analytics platforms that can sort through massive amounts of data while accounting for specific variables accurately and accessibly. Issues with data privacy, security, and integration need to be addressed before the deployment of multi-farm shared PLF solutions becomes commercially feasible.
Implications Advanced digitalization technologies can help modern farms optimize the economic contribution per animal, reduce the drudgery of repetitive farming tasks, and overcome less effective isolated solutions. There is now a strong cultural emphasis on reducing animal experiments and physical contact with animals in order to enhance animal welfare and avoid disease outbreaks. This trend has the potential to fuel more research on the use of novel biometric sensors, big data, and blockchain technology for the mutual benefit of livestock producers, consumers, and the farm animals themselves. Concerns about farmers’ autonomy and the shift from experience-driven to data-driven animal management practices are just two of the many barriers that digitalization must overcome before it can become widely implemented.
Keywords: Precision Livestock Farming | digitalization | Digital Technologies in Livestock Systems | sensor technology | big data | blockchain | data models | livestock agriculture
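The farm-to-table traceability idea in the abstract above can be illustrated with a minimal hash-chain sketch. This is a simplification for illustration only, not any production blockchain; the record fields are hypothetical:

```python
import hashlib
import json

def chain_records(records):
    """Link supply-chain events into a tamper-evident hash chain."""
    chain = []
    prev_hash = "0" * 64  # genesis value
    for rec in records:
        payload = json.dumps(rec, sort_keys=True)
        block_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        chain.append({"record": rec, "prev_hash": prev_hash, "hash": block_hash})
        prev_hash = block_hash
    return chain

def verify_chain(chain):
    """Recompute every hash; any edited record breaks the chain downstream."""
    prev_hash = "0" * 64
    for block in chain:
        payload = json.dumps(block["record"], sort_keys=True)
        if block["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True
```

Because each block's hash covers the previous block's hash, altering any earlier record (e.g., a disease-status field) invalidates every later block, which is what makes outbreak tracing auditable.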
Data Driven Robust Optimization for Handling Uncertainty in Supply Chain Planning Models (2021)
While addressing supply chain planning under uncertainty, Robust Optimization (RO) is regarded as an efficient and tractable method. Because RO involves calculating statistical moments or maximum/minimum values of the objective function under realizations of the uncertain parameters, its accuracy depends significantly on techniques that sample the uncertain parameter space efficiently with a limited amount of data. Conventional uncertainty sets, e.g. box, budget, and ellipsoidal, sample the uncertain parameter space inefficiently, often leading to inaccurate estimates. This paper proposes a methodology that amalgamates machine learning and data analytics with RO, thereby making it data-driven. A novel neuro-fuzzy clustering mechanism is implemented to cluster the uncertain space so that the exact regions of uncertainty are optimally identified. Subsequently, local-density-based boundary point detection and Delaunay-triangulation-based boundary construction enable intelligent Sobol-based sampling of the uncertain parameter space. The proposed technique is used to explore the merits of RO in addressing uncertainty in product demand, machine uptime, and production cost in a multiproduct, multisite supply chain planning model. The uncertainty in the supply chain model is thoroughly analysed through carefully constructed examples and case studies leading to large-scale mixed-integer linear and nonlinear programming problems, which were efficiently solved in the GAMS framework. A comprehensive analysis demonstrating the efficacy of the proposed method over box, budget, and ellipsoidal sampling is a further highlight of the work.
Keywords: Uncertainty Modelling | Supply chain Management | Data driven Robust Optimization | Neuro Fuzzy Clustering | Multi-Layered Perceptron
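The core data-driven idea above, sampling only where historical data actually lies rather than over one global box, can be sketched as follows. This is a simplification: plain k-means with uniform box sampling stands in for the paper's neuro-fuzzy clustering and Sobol sequences, and all function names are illustrative:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Lightweight k-means (farthest-point init + Lloyd iterations), a
    stand-in for the paper's neuro-fuzzy clustering of the uncertain space."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])  # farthest point from current centers
    centers = np.array(centers, dtype=float)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def sample_uncertainty_set(X, k=2, n_samples=100, seed=0):
    """Sample the uncertain parameter space only inside per-cluster bounding
    boxes of the historical data, instead of one loose global box."""
    rng = np.random.default_rng(seed)
    labels = kmeans(X, k)
    samples = []
    for j in range(k):
        pts = X[labels == j]
        if len(pts) == 0:
            continue
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        samples.append(rng.uniform(lo, hi, size=(n_samples // k, X.shape[1])))
    return np.vstack(samples)
```

With two well-separated demand regimes, a single global box would place many samples in the empty region between them; the clustered boxes avoid that, which is the inefficiency the paper targets.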
A framework based on BWM for big data analytics (BDA) barriers in manufacturing supply chains (2021)
Due to its potential utility, Big Data (BD) has recently attracted researchers and practitioners in decision-making. Big Data analytics (BDA) is becoming more common among manufacturing companies because it lets them gain insight and make decisions based on BD. Given the importance of both BD and BDA, this study aims to identify and analyse essential BDA adoption barriers in supply chains. This study explores the current knowledge base using a BWM (Best Worst Method) to discuss these barriers. Data were obtained from five Indian manufacturing companies. Research findings show that data-related barriers are the most significant. The findings will help managers understand the exact nature of the challenges and possible advantages of BDA and implement BDA policies for the growth and output of supply chain operations. © 2021 Elsevier Ltd. All rights reserved. Selection and peer-review under responsibility of the scientific committee of the 3rd International e-Conference on Frontiers in Mechanical Engineering and Nano Technology.
Keywords: Big data analytics | Barriers | Manufacturing supply chains | Best worst method (BWM)
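The Best Worst Method itself has a standard form: given a best-to-others comparison vector and an others-to-worst vector, criterion weights are found by minimising the maximum consistency violation. A sketch of the linear BWM formulation as a small LP (assuming SciPy is available; the function name is ours, not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_best, a_worst, best, worst):
    """Linear BWM: minimise xi subject to |w[best] - a_best[j]*w[j]| <= xi
    and |w[j] - a_worst[j]*w[worst]| <= xi, with weights summing to 1.
    Decision vector: [w_0 .. w_{n-1}, xi]."""
    n = len(a_best)
    c = np.zeros(n + 1)
    c[-1] = 1.0  # objective: minimise the consistency violation xi
    A_ub, b_ub = [], []
    for j in range(n):
        if j != best:  # |w_best - a_best[j] * w_j| <= xi, as two rows
            for sign in (1.0, -1.0):
                row = np.zeros(n + 1)
                row[best] += sign
                row[j] += -sign * a_best[j]
                row[-1] = -1.0
                A_ub.append(row); b_ub.append(0.0)
        if j != worst:  # |w_j - a_worst[j] * w_worst| <= xi
            for sign in (1.0, -1.0):
                row = np.zeros(n + 1)
                row[j] += sign
                row[worst] += -sign * a_worst[j]
                row[-1] = -1.0
                A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.append(np.ones(n), 0.0)]  # weights sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]
```

For a fully consistent comparison (a_best[j] * a_worst[j] equal for all j), xi comes out as zero and the weights are uniquely determined; a larger xi signals inconsistent expert judgments.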
Framework of Data Analytics and Integrating Knowledge Management (2021)
Big data is significantly dependent on technologies such as cloud computing, machine learning and statistical models. However, its significance is becoming more dependent on human qualities such as judgment, values, intuition and experience. Human knowledge therefore provides a basis for knowledge management and big data, which are major elements of data analytics. This research contribution applies the Data, Information, Knowledge and Perception hierarchy as a structure to evaluate the end-users’ process. The framework incorporates data analytics and presents a conceptual three-phase data analytics process framed as knowledge management, comprising the creation, discovery and application of knowledge. Knowledge conversion theories are applicable in data analytics to emphasize the typically overlooked organizational and human aspects, which are critical to the efficiency of data analytics. The synergy and alignment between knowledge management and data analytics is fundamental in fostering innovation and collaboration.
Keywords: Data analytics | Knowledge management | Big data | Business intelligence | Data discovery
Computer vision approach to characterize size and shape phenotypes of horticultural crops using high-throughput imagery (2021)
For many horticultural crops, variation in quality (e.g., shape and size) contributes significantly to the crop’s market value. Metrics characterizing less subjective harvest quantities (e.g., yield and total biomass) are routinely monitored. In contrast, metrics quantifying more subjective crop quality characteristics such as ideal size and shape remain difficult to characterize objectively at the production scale due to the lack of modular technologies for high-throughput sensing and computation. Many horticultural crops are sent to packing facilities after harvest, where they are sorted into boxes and containers using high-throughput scanners. These scanners capture images of each fruit or vegetable being sorted and packed, but the images are typically used solely for sorting purposes and promptly discarded. With further analysis, these images could offer unparalleled insight into how crop quality metrics vary at the industrial production scale and into how these characteristics translate to overall market value. At present, methods for extracting and quantifying quality characteristics of crops using images generated by existing industrial infrastructure have not been developed. Furthermore, prior studies that investigated horticultural crop quality metrics, specifically size and shape, used a limited number of samples, did not incorporate deformed or non-marketable samples, and did not use images captured from high-throughput systems. In this work, using sweetpotato (SP) as a use case, we introduce a computer vision algorithm for quantifying shape and size characteristics in a high-throughput manner. This approach generates a 3D model of each SP from two 2D images captured 90 degrees apart by an industrial sorter and extracts 3D shape features in a few hundred milliseconds.
We applied the 3D reconstruction and feature extraction method to thousands of image samples to demonstrate how variations in shape features across SP cultivars can be quantified. We created an SP shape dataset containing SP images, extracted shape features, and qualitative shape types (U.S. No. 1 or Cull). We used this dataset to develop a neural network-based shape classifier that was able to predict Cull vs. U.S. No. 1 SPs with 84.59% accuracy. In addition, using univariate Chi-squared tests and random forest, we identified the most important features for determining qualitative shape type (U.S. No. 1 or Cull) of the SPs. Our study serves as a key step towards enabling big data analytics for industrial SP agriculture. The methodological framework is readily transferable to other horticultural crops, particularly those that are sorted using commercial imaging equipment.
Keywords: Crop phenotyping | Machine learning | Computer vision
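Although the study's full 3D pipeline is more involved, the flavour of per-sample shape feature extraction from one silhouette view can be sketched as follows (illustrative only; the actual features and thresholds in the study differ, and a second orthogonal view would be combined the same way for the 3D estimate):

```python
import numpy as np

def shape_features(mask):
    """Basic size/shape descriptors from a binary silhouette: pixel area,
    bounding-box aspect ratio (length/width), and extent (how much of the
    bounding box the sample fills -- low extent suggests a curved/deformed sample)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    length, width = max(h, w), min(h, w)
    area = int(mask.sum())
    return {"area": area,
            "aspect_ratio": length / width,
            "extent": area / (h * w)}
```

Features like these, computed per fruit at sorter speed, are what a downstream classifier (such as the paper's neural network) would consume to separate U.S. No. 1 from Cull samples.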
Future Generation Computer Systems 116 (2021) 209–219
An organisation wishing to conduct data analytics to support day-to-day decision making often needs a system to help analysts represent and maintain knowledge about research variables, datasets or analytical models, and effectively determine the best combination to use when solving the problem at hand. Often, such knowledge is not explicitly captured by the organisation. To address this problem, this paper presents the design of an innovative Information Technology (IT) platform which enables data sharing between different analytics models and provides the ability to extend or customise models or data sources without necessarily involving the analysts who created them. It can make analytics knowledge readily available and modifiable for future use and problem-solving by analysts and other stakeholders. In the context of our work, we organise analytics knowledge around the concept of a research variable, which analysts often use when defining and proving a hypothesis. By focusing on such a concept, this platform is particularly suited to develop empirical data analytics applications in any domain. This paper presents the architecture of this platform, including the knowledge base and the Application Programming Interface (API) layer. Capabilities of this platform are illustrated through a software prototype and a use case on property price prediction across Sydney, Australia.
Keywords: Predictive analytics | Knowledge management | Knowledge base | Semantic modelling | Ontologies
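The platform's organising concept above, a research variable linked to the datasets that measure it and the models that consume it, can be sketched as a minimal registry. Class, field, and variable names here are hypothetical; the actual platform builds this on a semantic knowledge base and an API layer:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchVariable:
    """A named quantity an analyst hypothesises about, with links to the
    datasets that measure it and the models that consume it."""
    name: str
    datasets: list = field(default_factory=list)
    models: list = field(default_factory=list)

class AnalyticsKnowledgeBase:
    """Minimal registry: later analysts can look up which models already use
    a variable, or attach a new data source, without touching model code."""
    def __init__(self):
        self.variables = {}

    def register(self, var):
        self.variables[var.name] = var

    def models_using(self, name):
        return self.variables[name].models

    def add_dataset(self, name, dataset):
        self.variables[name].datasets.append(dataset)
```

For the paper's property-price use case, a variable such as a neighbourhood income measure could be registered once and then reused or extended across prediction models by different stakeholders.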
Improving supply chain resilience through Industry 4.0: A systematic literature review under the impressions of the COVID-19 pandemic (2021)
The COVID-19 pandemic is one of the most severe supply chain disruptions in history and has challenged practitioners and scholars to improve the resilience of supply chains. Recent technological progress, especially industry 4.0, indicates promising possibilities to mitigate supply chain risks such as the COVID-19 pandemic. However, the literature lacks a comprehensive analysis of the link between industry 4.0 and supply chain resilience. To close this research gap, we present evidence from a systematic literature review, including 62 papers from high-quality journals. Based on a categorization of industry 4.0 enabler technologies and supply chain resilience antecedents, we introduce a holistic framework depicting the relationship between both areas while exploring the current state-of-the-art. To verify industry 4.0’s resilience opportunities in a severe supply chain disruption, we apply our framework to a use case, the COVID-19-affected automotive industry. Overall, our results reveal that big data analytics is particularly suitable for improving supply chain resilience, while other industry 4.0 enabler technologies, including additive manufacturing and cyber-physical systems, still lack proof of effectiveness. Moreover, we demonstrate that visibility and velocity are the resilience antecedents that benefit most from industry 4.0 implementation. We also establish that industry 4.0 holistically supports pre-disruption resilience measures, enabling more effective proactive risk management. Both research and practice can benefit from this study. While scholars may analyze resilience potentials of under-explored enabler technologies, practitioners can use our findings to guide industry 4.0 investment decisions.
Keywords: Industry 4.0 | Supply chain risk management | Supply chain resilience | Supply chain disruption | Digital supply chain | Literature review
A knowledge-based Digital Shadow for machining industry in a Digital Twin perspective (2021)
This paper addresses the problems of data management and analytics for decision aid by proposing a new vision of the Digital Shadow (DS), considered as the core component of a future Digital Twin. Knowledge generated by experts and artificial intelligence is transformed into formal business rules and integrated into the DS to enable the characterization of the real behavior of the physical system throughout its operation stage. This behavior model is continuously enriched by direct or derived learning in order to improve the digital twin. The proposed DS relies on data analytics (based on unsupervised learning) and on a knowledge inference engine. It enables incidents to be detected and their operational context to be deciphered. An application example from the aeronautic machining industry is provided to stress both the feasibility of the proposition and its potential impact on shop-floor performance.
Keywords: Digital shadow | Digital twin | Data and knowledge management | Machining
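The knowledge inference step described above, formal business rules evaluated against the streamed machine state, can be sketched as follows. The rule names, signal names, and thresholds are invented for illustration, not taken from the paper:

```python
def make_rule(name, condition):
    """A business rule: a named predicate over one machine-state snapshot,
    the formalised version of an expert's 'if load is high, flag it'."""
    return {"name": name, "condition": condition}

def evaluate(rules, state):
    """One inference pass of the Digital Shadow: return the incidents whose
    conditions hold for the current streamed state."""
    return [r["name"] for r in rules if r["condition"](state)]

# Hypothetical machining rules (thresholds illustrative).
rules = [
    make_rule("spindle_overload",
              lambda s: s["spindle_load_pct"] > 90),
    make_rule("chatter_suspected",
              lambda s: s["vibration_rms"] > 2.5 and s["feed_rate"] > 0.2),
]
```

In the paper's vision, such rules come both from experts and from learned behavior models, and firing rules are enriched with operational context before being reported.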
Implementation of a Vision-Based Worker Assistance System in Assembly: a Case Study (2021)
The current introduction of Industry 4.0 is very challenging for industrial companies. On the one hand, there is an urge to implement concepts such as digital worker assistance systems or cyber-physical production systems, but beyond theoretical work, there is very little research that shows examples of practical implementation. Furthermore, there is currently no clear model of how sensor-based worker assistance systems for data acquisition and analytics can be designed and systematically implemented. In the present research, a model for a vision-based worker assistance system for assembly was developed based on an industrial case study of a manual assembly line. The proposed model consists of five integrated modules: data acquisition, data preprocessing, data storage, data analysis, and simulation. The data acquisition module was constructed at the assembly workstation of the production line by implementing a depth camera which, together with an algorithm developed in Python for preprocessing, tracks the activities of the operator and inserts the processing times into a SQL table of the data storage module. This module contains all the relevant information of the production system, from the shop floor to the Manufacturing Execution System, enabling vertical integration. The data analysis module, aimed at streaming and predictive analytics, was deployed on the RStudio platform. Likewise, the simulation module was conceptualized to retrieve real-time data from the shop floor and to select the best strategy. To evaluate the model, testing of the proposed system in real production was performed. The results of this use case provide useful information for academia as well as practitioners on how to implement vision-based worker assistance systems. © 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer-review under responsibility of the scientific committee of the 8th CIRP Global Web Conference – Flexible Mass Customisation
Keywords: industry 4.0 | data analytics | cyber-physical production system | computer vision | smart manufacturing | assembly
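The hand-off from vision-based activity tracking to the SQL-backed data storage module can be sketched with Python's built-in sqlite3. The table and column names are hypothetical, since the case study's schema is not given:

```python
import sqlite3
import time

def setup_db(conn):
    """Data storage module: one table of observed operator processing times."""
    conn.execute("""CREATE TABLE IF NOT EXISTS processing_times (
        station TEXT, operation TEXT, duration_s REAL, recorded_at REAL)""")

def log_operation(conn, station, operation, duration_s):
    """What the depth-camera tracking pipeline would call once an operator
    activity is recognised as finished."""
    conn.execute("INSERT INTO processing_times VALUES (?, ?, ?, ?)",
                 (station, operation, duration_s, time.time()))
    conn.commit()

def mean_duration(conn, operation):
    """A feed for the analytics/simulation modules: average cycle time."""
    row = conn.execute(
        "SELECT AVG(duration_s) FROM processing_times WHERE operation = ?",
        (operation,)).fetchone()
    return row[0]
```

Queries like `mean_duration` are the kind of aggregate the downstream streaming-analytics and simulation modules would pull to compare scheduling strategies against real shop-floor cycle times.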
Integrating big data analytics into supply chain finance: The roles of information processing and data-driven culture (2021)
The role of big data in implementing supply chain finance (SCF) initiatives lacks empirical study, and there is little guidance available for managers on developing an integrated SCF process in the era of big data. Using organizational information processing theory, this study develops and empirically tests a theoretical framework that investigates the effect of big data analytics capability (BDAC) on SCF Integration, and the moderating effect of data-driven culture. The hypothesized relationships were tested using structural equation modelling and moderated regression analysis, with primary survey data collected from a sample of 307 manufacturing firms in China. The results indicate that BDAC has a significant positive effect on internal SCF Integration, and that internal SCF Integration fully mediates the relationships between BDAC and SCF Integration with customers and suppliers. Data-driven culture significantly moderates the effect of BDAC on internal SCF Integration. These empirical findings provide timely and useful guidance for managers on using big data analytics and a data-driven culture to implement integrated SCF practices and survive in today’s data-rich and uncertain environment.
Keywords: Big data analytics capability | Data-driven culture | Integrated supply chain finance | Information processing capability
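The moderated-regression part of the analysis above has a standard form: the moderator enters through an interaction term, and that term's coefficient captures the moderation effect. A minimal OLS sketch with NumPy, using synthetic data rather than the study's survey data:

```python
import numpy as np

def moderated_regression(x, m, y):
    """OLS fit of y = b0 + b1*x + b2*m + b3*(x*m).  A nonzero b3 is the
    usual evidence that m (e.g., data-driven culture) moderates the
    effect of x (e.g., BDAC) on y (e.g., internal SCF integration)."""
    X = np.column_stack([np.ones_like(x), x, m, x * m])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```

In practice the predictors would be mean-centered before forming the interaction and the coefficient tested for significance; this sketch only shows the structural form of the model.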