Introducing an application of an Industry 4.0 solution for circular supply chain management (2021)
In recent years, sustainable supply chain management practices have been adopted by companies that seek to reduce the negative environmental and social impacts within their supply chains. Within this perspective, a circular approach has been developed in the supply chain literature. Circular economy models and solutions assisted by Industry 4.0 technologies have been developed to transform products at the end of their life cycle into new products with different uses. In this paper, an Industry 4.0 waste-to-energy solution is developed and applied in a pilot case study comprising a real-world supply chain to evaluate the sustainability performance of circular supply chain management (CSCM). The findings show that redesigning supply chains for the circular economy with the use of Industry 4.0 technologies can enable circular supply chain management. Clear benefits are demonstrated by linking the proposed solution to the six circular economy dimensions of the ReSOLVE model, i.e., regenerate, share, optimize, loop, virtualise, and exchange. Improved availability of personnel (5% and 15%) and fleet resources (15%) are identified as some of the key quantitative benefits, while supply chain traceability, through the full visibility and automation offered by the proposed solution, is one of the key non-quantifiable outcomes. The present work seeks to contribute to the existing literature by providing empirical evidence of how Industry 4.0 and the circular economy are applied in practice. Implications for managers and policy makers, along with the study limitations and further research paths, are also presented. © 2021 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
Keywords: Circular economy | Circular supply chain management (CSCM) | Industry 4.0 | Waste-to-energy | ReSOLVE model
Generative Deep Learning in Digital Pathology Workflows (2021)
Many modern histopathology laboratories are in the process of digitizing their workflows. Digitization of tissue images has made it feasible to research the augmentation or automation of clinical reporting and diagnosis. The application of modern computer vision techniques, based on deep learning, promises systems that can identify pathologies in slide images with a high degree of accuracy. Generative modeling is an approach to machine learning and deep learning that can be used to transform and generate data. It can be applied to a broad range of tasks within digital pathology, including the removal of color and intensity artifacts, the adaptation of images in one domain into those of another, and the generation of synthetic digital tissue samples. This review provides an introduction to the topic, considers these applications, and discusses future directions for generative models within histopathology.
Weight and volume estimation of poultry and products based on computer vision systems: a review (2021)
The appearance, size, and weight of poultry meat and eggs are essential for production economics and vital in the poultry sector. These external characteristics influence their market price and consumers’ preference and choice. With technological developments, there is an increase in the application and importance of vision systems in the agricultural sector. Computer vision has become a promising tool in the real-time automation of poultry weighing and processing systems. Owing to its noninvasive and nonintrusive nature and its capacity to present a wide range of information, computer vision can be applied to size, mass, and volume determination and to the sorting and grading of poultry products. This review article gives a detailed summary of the current advances in measuring poultry products’ external characteristics based on computer vision systems. An overview of computer vision systems is discussed and summarized. A comprehensive presentation of the application of computer vision-based systems for assessing poultry meat and eggs is provided, covering weight and volume estimation, sorting, and classification. Finally, the challenges and potential future trends in size, weight, and volume estimation of poultry products are reported.
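A common way such vision systems map an image measurement to a weight is a calibrated regression from a segmented-area feature. The sketch below is illustrative only: the ordinary least-squares fit is standard, but the area/weight calibration pairs are invented, not taken from the reviewed studies.

```python
# Minimal sketch: estimating egg weight from a projected-area feature
# with an ordinary least-squares line fit (calibration data is hypothetical).

def fit_line(xs, ys):
    """Return slope a and intercept b of the least-squares line y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def predict_weight(area_px, a, b):
    """Map a segmented projected area (pixels) to an estimated weight (grams)."""
    return a * area_px + b

# Hypothetical calibration pairs: (projected area in pixels, weight in grams).
areas = [5200, 5800, 6100, 6700, 7300]
weights = [52.0, 57.5, 60.2, 66.1, 72.0]
a, b = fit_line(areas, weights)
print(round(predict_weight(6400, a, b), 1))
```

In practice, published systems often use higher-order or multi-feature models (area plus major/minor axis lengths), but the calibrate-then-predict structure is the same.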
Keywords: classification | computer vision | egg | weight estimation | poultry product
Deep learning-based computer vision to recognize and classify suturing gestures in robot-assisted surgery (2021)
Background: Our previous work classified a taxonomy of suturing gestures during a vesicourethral anastomosis of robotic radical prostatectomy in association with tissue tears and patient outcomes. Herein, we train deep learning-based computer vision to automate the identification and classification of suturing gestures for needle driving attempts.
Methods: Using two independent raters, we manually annotated live suturing video clips to label timepoints and gestures. Identification (2,395 videos) and classification (511 videos) datasets were compiled to train computer vision models to produce 2- and 5-class label predictions, respectively. Networks were trained on inputs of raw red/green/blue pixel values as well as optical flow for each frame. Each model was trained on 80/20 train/test splits.
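The 80/20 train/test partition mentioned above can be sketched as a seeded shuffle-and-split over clip identifiers. This is a generic illustration, not the authors' code; the clip naming is invented, and only the dataset size (2,395 identification videos) comes from the abstract.

```python
# Sketch of an 80/20 train/test split with a seeded shuffle so the
# partition is reproducible (clip IDs are hypothetical).
import random

def train_test_split(items, test_fraction=0.2, seed=42):
    """Shuffle a copy of the items and split them into train/test lists."""
    pool = list(items)
    random.Random(seed).shuffle(pool)
    n_test = int(len(pool) * test_fraction)
    return pool[n_test:], pool[:n_test]

clips = [f"clip_{i:04d}" for i in range(2395)]  # identification dataset size
train, test = train_test_split(clips)
print(len(train), len(test))  # 1916 479
```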
Results: In this study, all models were able to reliably predict both the presence of a gesture (identification, area under the curve: 0.88) and the type of gesture (classification, area under the curve: 0.87) at significantly above chance levels. For both gesture identification and classification datasets, we observed no effect of recurrent classification model choice (long short-term memory unit versus convolutional long short-term memory unit) on performance.
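The area-under-the-curve figures above can be interpreted as the probability that a randomly chosen positive example is scored higher than a randomly chosen negative one. A minimal sketch of that rank-based (Mann-Whitney) computation, with made-up labels and scores:

```python
# Illustrative AUC computation: the fraction of positive/negative pairs
# in which the positive example outscores the negative one (ties count 0.5).

def auc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]        # 1 = gesture present, 0 = absent
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
print(round(auc(labels, scores), 3))  # 8 of 9 pairs ranked correctly
```

An AUC of 0.5 corresponds to chance-level ranking, which is the baseline the reported 0.88 and 0.87 are measured against.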
Conclusion: Our results demonstrate computer vision’s ability to recognize features that not only can identify the action of suturing but also distinguish between different classifications of suturing gestures. This demonstrates the potential to utilize deep learning computer vision toward future automation of surgical skill assessment.
Real-time plant phenomics under robotic farming setup: A vision-based platform for complex plant phenotyping tasks (2021)
Plant phenotyping in general refers to the quantitative estimation of a plant’s anatomical, ontogenetical, physiological, and biochemical properties. Analyzing such big data is challenging and non-trivial given the complexities involved, and efficient processing and analysis pipelines are the need of the hour as phenotyping technologies and sensors grow in popularity. Through this work, we largely address the overlapping object segmentation and localization problem. Further, we dwell upon multi-plant pipelines, in which detection and multi-object tracking become critical for processing single frames or sets of frames for uniform tagging and visual feature extraction. A plant phenotyping tool named RTPP (Real-Time Plant Phenotyping) is presented that can aid in the detection of single- and multi-plant traits, modeling, and visualization for agricultural settings. We compare our system with the plantCV platform. The relationship between the digital estimations and the measured plant traits is discussed, which provides a vital roadmap towards precision farming and/or plant breeding.
Keywords: Phenotype | Image processing | Spectral | Robotics | Object localization | Precision agriculture | Plant science | Pattern recognition | Computer vision | Automation | Perception
Civil engineering stability inspection based on computer vision and sensors (2021)
Computer vision technology, combined with remote cameras and drones, offers a promising non-contact solution for the condition evaluation of civil infrastructure. The ultimate goal of such a system is to automatically and reliably convert image or video data into actionable information. This paper provides an overview of the latest developments in computer vision technology as applied to the condition evaluation of civil infrastructure. Deep learning has been applied to a wide variety of computer vision tasks and covers most of the applications reviewed here; each application has its own architecture, defined by its input images, label data, and loss function. The reviewed work can be divided into two types: inspection applications and monitoring applications. Inspection applications include identifying context, local and global features, visible damage, and changes relative to a reference image. Monitoring applications described herein include static and dynamic strain measurement, modal analysis, and displacement measurement. Next, several key challenges on the path towards automated vision-based inspection and monitoring of civil infrastructure are discussed. Finally, ongoing work that aims to address some of these challenges is outlined.
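Vision-based displacement measurement, one of the monitoring applications named above, typically works by finding the shift that best re-aligns a patch of a reference frame with a later frame. The toy sketch below does this in one dimension with synthetic intensity profiles; real systems use 2D template matching at sub-pixel resolution, which this does not attempt.

```python
# Toy sketch of vision-based displacement measurement: find the integer
# pixel shift that best aligns a 1D intensity profile from a reference
# frame with the same profile in a later frame (signals are synthetic).

def best_shift(reference, current, max_shift=3):
    """Return the shift minimizing mean squared difference over the overlap."""
    best, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(reference[i], current[i + s])
                 for i in range(len(reference))
                 if 0 <= i + s < len(current)]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best, best_err = s, err
    return best

ref = [0, 0, 5, 9, 5, 0, 0, 0]        # bright feature centered at index 3
cur = [0, 0, 0, 0, 5, 9, 5, 0]        # same feature moved two pixels right
print(best_shift(ref, cur))           # estimated displacement in pixels
```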
Keywords: Monitoring applications | Computer vision | Accelerometer | Non-destructive evaluation | Conventional-contact displacement sensors
Towards practical 2D grapevine bud detection with fully convolutional networks (2021)
In viticulture, visual inspection of the plant is a necessary task for measuring relevant variables. In many cases, these visual inspections are susceptible to automation through computer vision methods. Bud detection is one such visual task, central for the measurement of important variables such as bud sunlight exposure, autonomous pruning, bud counting, type-of-bud classification, bud geometric characterization, internode length, bud area, and bud development stage, among others. This paper presents a computer vision method for grapevine bud detection based on a Fully Convolutional Networks MobileNet architecture (FCN-MN). To validate its performance, this architecture was compared in the detection task with a strong method for bud detection, Scanning Windows (SW) based on a patch classifier, showing improvements in three aspects of detection: segmentation, correspondence identification, and localization. The best version of FCN-MN showed a detection F1-measure of 88.6% (for true positives defined as detected components whose intersection-over-union with the true bud is above 0.5), and false positives that are small and near the true bud. Splits (false positives overlapping the true bud) showed a mean segmentation precision of 89.3% (21.7), while false alarms (false positives not overlapping the true bud) showed a mean pixel area of only 8% of the area of a true bud, and a distance between mass centers of 1.1 true bud diameters. The paper concludes by discussing how these results for FCN-MN would produce sufficiently accurate measurements of bud variables such as bud number, bud area, and internode length, suggesting good performance in a practical setup.
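The true-positive criterion quoted above (intersection-over-union with the true bud above 0.5) can be sketched directly. This is a generic illustration of the IoU metric on pixel masks, not the paper's evaluation code; the two square masks are invented.

```python
# Sketch of the intersection-over-union (IoU) true-positive criterion:
# a detected component counts as a true positive when its IoU with the
# true bud mask exceeds 0.5. Masks here are sets of (row, col) pixels.

def iou(mask_a, mask_b):
    """Intersection-over-union of two pixel masks."""
    inter = len(mask_a & mask_b)
    union = len(mask_a | mask_b)
    return inter / union if union else 0.0

def is_true_positive(detection, truth, threshold=0.5):
    return iou(detection, truth) > threshold

true_bud = {(r, c) for r in range(10) for c in range(10)}     # 10x10 mask
detected = {(r, c) for r in range(2, 12) for c in range(10)}  # shifted 2 rows
print(round(iou(detected, true_bud), 3), is_true_positive(detected, true_bud))
```

Here the detection overlaps 80 of 120 union pixels (IoU 0.667), so it passes the 0.5 threshold despite the two-row misalignment.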
Keywords: Computer vision | Fully convolutional network | Grapevine bud detection | Precision viticulture
A deep learning approach to measure stress level in plants due to Nitrogen deficiency (2021)
Stress due to nutrient deficiency in plants can reduce agricultural yield significantly. Nitrogen, an essential nutrient, is a crucial growth-limiting factor and is the prime component of amino acids, proteins, nucleic acids, and chlorophyll. Nitrogen deficiency affects certain visible plant traits such as area, color, number of leaves, and plant height. With the recent advancements in imaging technology, computer vision-based plant phenomics has become a promising field of plant research and management. Such imaging-based techniques are non-destructive and much faster, with higher levels of automation. In this work, we propose an automatic image-based plant phenotyping approach for stress classification in plant shoot images. In this approach, a 23-layer deep learning architecture is introduced and compared with traditional machine learning techniques and a few other deep architectures. Results reveal that a simple 23-layer deep learning architecture is comparable to established state-of-the-art deep learning architectures such as ResNet18 and NasNet Large (which have millions of trainable parameters) in yielding ceiling-level stress classification from plant shoot images. In addition, the proposed model also outperforms traditional machine learning techniques, achieving on average 8.25% better accuracy.
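The accuracy comparison reported above (an average 8.25% gap between the deep model and traditional machine learning) rests on the plain classification-accuracy metric. A minimal sketch, with invented predictions rather than the paper's results:

```python
# Sketch of the accuracy metric used to compare stress classifiers;
# ground-truth labels and both prediction lists are hypothetical.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true   = ["stressed", "healthy", "stressed", "healthy", "stressed"]
cnn_pred = ["stressed", "healthy", "stressed", "healthy", "healthy"]
svm_pred = ["stressed", "healthy", "healthy",  "healthy", "healthy"]
print(accuracy(y_true, cnn_pred), accuracy(y_true, svm_pred))  # 0.8 0.6
```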
Keywords: Computer vision | Deep learning | Nitrogen stress | Plant phenotyping
A critical review for machining positioning based on computer vision (2021)
With the rapid development of science and technology, the manufacturing industry has to cope with increasingly strict requirements on the quality of processed products. To improve production flexibility and automation, computer vision is widely used in machining due to its safety, reliability, continuity, high accuracy, and real-time performance. In this study, a comprehensive review of positioning methods for workpieces in machining is presented from the perspective of computer vision technology. First, the key technologies in image acquisition are described in detail, and an analysis of different lighting modes is conducted. Second, image preprocessing is described by summarizing enhancement and image segmentation methods. Third, from the perspectives of accuracy and speed, feature extraction methods are compared and evaluated. Next, the existing applications of visual positioning technology in machining are discussed. Finally, the existing problems are summarized, and future research directions are suggested.
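One of the simplest image segmentation steps covered in such preprocessing pipelines is global thresholding, which separates a workpiece from its background by intensity. The sketch below runs on a toy 5x5 grayscale grid; production systems would use adaptive thresholds (e.g. Otsu's method) on real images, which this does not attempt.

```python
# Illustrative global-threshold segmentation: binarize a 2D grayscale
# image so bright (workpiece) pixels become 1 and background becomes 0.

def threshold(image, t):
    """Return a binary mask: 1 where pixel > t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in image]

img = [
    [ 10,  12,  11,  13,  10],
    [ 12, 200, 210, 205,  11],
    [ 11, 205, 215, 208,  12],
    [ 13, 202, 212, 206,  10],
    [ 10,  11,  12,  13,  11],
]
mask = threshold(img, 100)
print(sum(sum(row) for row in mask))  # 9 foreground pixels
```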
Keywords: Visual positioning | Positioning processing | Optical system | Image preprocessing | Feature extraction
Detecting cooking state of grilled chicken by electronic nose and computer vision techniques (2021)
Determination of food doneness remains a challenge for automation in the cooking industry. The complex physicochemical processes that occur during cooking require a combination of several methods for their control. Herein, we utilized an electronic nose and computer vision to check the cooking state of grilled chicken. Thermogravimetry, differential mobility analysis, and mass spectrometry were employed to deepen the fundamental insights into the grilling process. The results indicated that an electronic nose could distinguish the odor profile of the grilled chicken, whereas computer vision could identify discoloration of the chicken. The integration of these two methods yields greater selectivity towards the qualitative determination of chicken doneness. The odor profile is matched with detected water loss and the release of aromatic and sulfur-containing compounds during cooking. This work demonstrates the practicability of the developed technique, which we compared with a sensory evaluation, for better deconvolution of food state during cooking.
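The integration of the two sensing modalities can be pictured as a late-fusion decision rule over per-sensor scores. The sketch below is purely illustrative: the weights, scores, and decision threshold are invented, not the paper's chemometric model.

```python
# Hedged sketch of combining an electronic-nose doneness score with a
# computer-vision (discoloration) score by weighted late fusion; all
# numbers are hypothetical, for illustration only.

def fuse(nose_score, vision_score, w_nose=0.5, w_vision=0.5):
    """Weighted average of two doneness scores, each in [0, 1]."""
    return w_nose * nose_score + w_vision * vision_score

def doneness(nose_score, vision_score, threshold=0.6):
    """Binary decision on the fused score."""
    return "done" if fuse(nose_score, vision_score) >= threshold else "undercooked"

print(doneness(0.7, 0.8))  # strong odor signal + browning
print(doneness(0.3, 0.5))  # weak signals on both channels
```

The point of fusing rather than using either channel alone is the selectivity gain the abstract describes: a strong reading on one sensor cannot by itself force a "done" decision.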
Keywords: Chemometrics | Computer vision | Electronic nose | Food preparation | Sensory analysis