Concurrent validity of a custom computer vision algorithm for measuring lumbar spine motion from RGB-D camera depth data
(2021)
Using RGB-D cameras as an alternative motion capture device can be advantageous for biomechanical spine motion assessments of movement quality and dysfunction due to their lower cost and complexity. In this study, we evaluated RGB-D camera performance relative to gold-standard optoelectronic motion capture equipment. Twelve healthy young adults (6M, 6F) were recruited to perform repetitive spine flexion-extension while wearing infrared reflective marker clusters placed over their T10-T12 spinous processes and sacrum, and motion capture data were recorded simultaneously by both systems. Custom computer vision algorithms were developed to extract spine angles from depth data. Root mean square error (RMSE) was calculated for continuous Euler angles, and intraclass correlation coefficients (ICC(2,1)) were calculated for minimum and maximum angles and range of motion in all movement planes. RMSE was low (RMSE ≤ 2.05°) and reliability was good to excellent (0.849 ≤ ICC(2,1) ≤ 0.979) across all movement planes. In conclusion, the proposed algorithm for tracking 3D lumbar spine motion during a sagittal movement task from one RGB-D camera is reliable in comparison to gold-standard motion tracking equipment. Future research will investigate accuracy and validity in a wider variety of movements, and will also investigate the development of novel methods to measure spine motion without using infrared reflective markers.
Keywords: RGB-D cameras | Computer vision | Depth camera | Low back pain | Movement quality
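The two agreement statistics this abstract reports (RMSE over continuous angle traces, and ICC(2,1) over discrete angle summaries) can be sketched as follows. This is a minimal illustration, not the authors' code; the ICC(2,1) form (two-way random effects, absolute agreement, single measures) is computed from standard ANOVA mean squares.

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two angle time series (e.g. the
    RGB-D-derived and optoelectronic-derived Euler angles)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

def icc_2_1(data):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.
    data is an (n_subjects, k_raters) array, here k=2 measurement systems."""
    data = np.asarray(data, float)
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-system means
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_err = np.sum((data - grand) ** 2) - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                 # between-subjects mean square
    ms_c = ss_cols / (k - 1)                 # between-systems mean square
    ms_e = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

With perfectly agreeing systems `icc_2_1` returns 1.0, and a constant offset between systems pulls it below 1, which is why the absolute-agreement form is the appropriate choice for cross-device validation.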
Fast heuristic method to detect people in frontal depth images
(2021)
This paper presents a new method for detecting people using only depth images captured by a camera in a frontal position. The approach is based on first detecting all the objects present in the scene and determining their average depth (distance to the camera). Next, for each object, a 3D Region of Interest (ROI) is processed around it in order to determine if the characteristics of the object correspond to the biometric characteristics of a human head. The results obtained using three public datasets captured by three depth sensors with different spatial resolutions and different operating principles (structured light, active stereo vision, and Time of Flight) are presented. These results demonstrate that the method can run in real time on a low-cost CPU platform with high accuracy: processing times are under 1 ms per frame for a 512 × 424 image resolution with a precision of 99.26%, and under 4 ms per frame for a 1280 × 720 image resolution with a precision of 99.77%.
Keywords: 3D People detection | Depth camera | Frontal Depth images | Feature extraction | Head biometric classification
Implementation of a Vision-Based Worker Assistance System in Assembly: a Case Study
(2021)
The current introduction of Industry 4.0 is very challenging for industrial companies. On the one hand, there is an urge to implement concepts such as digital worker assistance systems or cyber-physical production systems, but besides theoretical work, there is very little research that shows examples of their practical implementation. Furthermore, there is currently a lack of a clear model of how sensor-based worker assistance systems for data acquisition and analytics can be designed and systematically implemented. In the present research, a model for a vision-based worker assistance system for assembly was developed based on an industrial case study regarding a manual assembly line. The proposed model consists of five integrated modules: data acquisition, data preprocessing, data storage, data analysis, and simulation. The data acquisition module was constructed in the assembly workstation of the production line by implementing a depth camera, which, together with an algorithm developed in Python for preprocessing, tracks the activities of the operator and inserts the processing times into a SQL table of the data storage module. This module contains all the relevant information of the production system, from the shop floor to the Manufacturing Execution System, enabling vertical integration. The data analysis module, aimed at streaming and predictive analytics, was deployed in the RStudio platform. Likewise, the simulation module was conceptualized to retrieve real-time data from the shop floor and to select the best strategy. To evaluate the model, testing of the proposed system in real production was performed. The results of this use case provide useful information for academia as well as practitioners on how to implement vision-based worker assistance systems. © 2021 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license (https://creativecommons.org/licenses/by-nc-nd/4.0). Peer-review under responsibility of the scientific committee of the 8th CIRP Global Web Conference – Flexible Mass Customisation.
Keywords: industry 4.0 | data analytics | cyber-physical production system | computer vision | smart manufacturing | assembly
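The acquisition-to-storage step described above — camera-derived activity timings inserted into a SQL table — can be sketched with a minimal stand-in. The table name, columns, and the use of sqlite3 are illustrative assumptions; the paper's actual schema and database are not specified in the abstract:

```python
import sqlite3

def make_store(path=":memory:"):
    """Create the processing-time table. sqlite3 stands in here for the
    data storage module's SQL database (a hypothetical schema)."""
    conn = sqlite3.connect(path)
    conn.execute("""CREATE TABLE IF NOT EXISTS cycle_times (
        station TEXT, activity TEXT, seconds REAL)""")
    return conn

def log_cycle(conn, station, activity, start_s, end_s):
    """Insert one tracked operator activity with its processing time,
    as the preprocessing algorithm would after detecting activity
    boundaries in the depth stream."""
    conn.execute("INSERT INTO cycle_times VALUES (?, ?, ?)",
                 (station, activity, end_s - start_s))
    conn.commit()
```

Downstream modules (the RStudio analytics and the simulation) would then query this table, which is what gives the architecture its vertical integration from shop floor to MES.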
Potato feature prediction based on machine vision and 3D model rebuilding
(2017)
Machine vision based on color, multispectral, and hyperspectral cameras to develop potato quality grading can be used to predict length, width, and mass, as well as defects on the interior and exterior of a sample. However, the images obtained by these cameras are limited to two-dimensional shape information, including width, length, and boundary. Other vital elements of appearance data related to potato mass and quality, including thickness, volume, and surface gradient changes, are difficult to detect due to slight surface color differences and device limitations. In this study, we recorded the depth images of 110 potatoes using a depth camera, including samples with uniform shapes or with deformations (e.g., bumps and divots). A novel method was developed for estimating potato mass and shape information, and three-dimensional models were built utilizing a new image processing algorithm for depth images. Other features, including length, width, thickness, and volume, were also calculated as mass prediction related factors. Experimental results indicate that the proposed models accurately predict potato length, width, and thickness; the mean absolute errors for these predictions were 2.3 mm, 2.1 mm, and 2.4 mm, respectively, while the mean percentage errors were 2.5%, 3.5%, and 4.4%. Mass prediction based on a 3D volume model for both normal and deformed potato samples proved to be more accurate compared to models based on area calculation. Thus, 93% of samples were graded into the correct size group using the volume density model, while only 73% were graded correctly using the area density. Depth image processing is therefore a promising method for future non-destructive post-harvest grading, especially for products where size, shape, and surface condition are important factors. © 2017 Elsevier B.V. All rights reserved.
Keywords: Machine vision | Potato | Depth image processing | 3D model building | Features prediction
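The volume-then-mass idea behind this abstract can be sketched by integrating per-pixel object thickness (background depth minus measured depth) over each pixel's metric footprint. This is a simplified illustration, not the paper's algorithm; the focal lengths and potato density below are assumed values:

```python
import numpy as np

def volume_from_depth(depth, background_depth, fx, fy):
    """Approximate object volume (m^3) from a top-down depth image.
    depth: per-pixel distance to the camera in metres.
    background_depth: distance to the flat surface the object rests on.
    fx, fy: camera focal lengths in pixels (assumed intrinsics)."""
    depth = np.asarray(depth, float)
    # Object thickness under each pixel; background pixels contribute zero.
    height = np.clip(background_depth - depth, 0.0, None)
    # Metric footprint of one pixel at depth z is (z/fx) * (z/fy).
    pixel_area = (depth / fx) * (depth / fy)
    return float(np.sum(height * pixel_area))

def mass_from_volume(volume_m3, density_kg_m3=1080.0):
    """Mass via an assumed bulk density (~1.08 g/cm^3, illustrative,
    not a value taken from the paper)."""
    return volume_m3 * density_kg_m3
```

Because the thickness term is measured per pixel, bumps and divots change the integrated volume directly, which is the advantage a 3D volume model has over a 2D area-density model for deformed samples.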