Measurement Science And Technology - IOPscience

Medical & healthcare robotics: a roadmap for enhanced precision, safety, and efficacy (Open access)

Dimitris K Iakovidis et al 2025 Meas. Sci. Technol. 36 103001

Medical robotics holds transformative potential for healthcare. Robots excel in tasks requiring precision, including surgery and minimally invasive interventions, and they can enhance diagnostics through improved automated imaging techniques. Despite this potential, the adoption of robotics still faces obstacles, such as high costs, technological limitations, regulatory issues, and concerns about patient safety and data security. This roadmap, authored by an international team of experts, critically assesses the state of medical robotics, highlighting existing challenges and emphasizing the need for novel research contributions to improve patient care and clinical outcomes. It explores advancements in machine learning, highlighting the importance of trustworthiness and interpretability in robotics, the development of soft robotics for surgical and rehabilitation applications, and the role of image-guided robotic systems in diagnostics and therapy. Mini, micro, and nano robotics for surgical interventions, as well as rehabilitation and assistive robots, are also discussed. Furthermore, the roadmap addresses service robots in healthcare, covering navigation, logistics, and telemedicine. For each of the topics addressed, current challenges and future directions to improve patient care through medical robotics are suggested.

https://doi.org/10.1088/1361-6501/ae09bf

Uncertainty quantification in particle image velocimetry (Open access)

A Sciacchitano 2019 Meas. Sci. Technol. 30 092001

Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.

This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain: from timing and synchronization of the data acquisition system, to illumination, the mechanical properties of the tracer particles, their imaging, analysis of the particle motion, and data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.

Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.

As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
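The Monte Carlo route to uncertainty propagation can be sketched in a few lines: perturb a velocity field with its assumed per-vector uncertainty many times and measure the resulting spread of a derived quantity such as vorticity. The grid, noise level, and sample count below are illustrative choices, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
nx, ny, h = 32, 32, 0.01               # grid size and spacing [m]
x = np.arange(nx) * h
y = np.arange(ny) * h
X, Y = np.meshgrid(x, y, indexing="ij")

u, v = -Y, X                           # synthetic solid-body-like field
sigma_u = 0.02                         # assumed per-vector uncertainty [m/s]

def vorticity(u, v, h):
    """omega_z = dv/dx - du/dy via finite differences."""
    return np.gradient(v, h, axis=0) - np.gradient(u, h, axis=1)

# Perturb the field many times and take the pointwise standard deviation.
samples = [vorticity(u + rng.normal(0, sigma_u, u.shape),
                     v + rng.normal(0, sigma_u, v.shape), h)
           for _ in range(500)]
omega_std = np.std(samples, axis=0)

# Away from the borders, central differencing gives std close to sigma_u / h.
print(omega_std[4:-4, 4:-4].mean())
```

The same loop works for any derived quantity (Reynolds stresses, pressure) by swapping the function being evaluated.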

https://doi.org/10.1088/1361-6501/ab1db8

Time-gated Raman spectroscopy – a review (Open access)

Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002

Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.

In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.

https://doi.org/10.1088/1361-6501/abb044

A rapid SINS alignment method based on observational measurement extension (Open access)

Yong Yang et al 2025 Meas. Sci. Technol. 36 125005

Initial alignment is one of the critical technologies for the smooth and effective operation of the strapdown inertial navigation system (SINS), and alignment time and accuracy are the two decisive indicators of SINS initial alignment performance. To address the issues of excessive alignment time and low accuracy in the initial alignment process, a new method utilizing extended system observations is proposed, which includes velocity errors, equivalent specific force outputs, and equivalent gyro angular velocity errors. System state equations and observation equations are established based on the fundamental principles and error characteristics of inertial sensors. Singular value decomposition is employed to calculate the singular values of each system’s observation matrix, and observability is determined through singular value analysis. Experimental results indicate that the inclusion of equivalent specific force outputs accelerates the convergence of the horizontal misalignment angle, while equivalent gyro angular velocity errors expedite the convergence of the azimuth misalignment angle. The proposed method, with its enhanced system observability, improves alignment accuracy significantly: horizontal accuracy increases by approximately 62% and vertical accuracy by about 42% compared to a filtering model using only velocity errors as observations.
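The singular-value observability test can be sketched on a toy linear system (not the SINS error model of the article): stack H, HF, HF², ... into an observability matrix and count the singular values that are effectively nonzero.

```python
import numpy as np

# Toy 4-state system: states 1-2 form a measured double-integrator chain,
# states 3-4 are an identical chain that the measurement never touches.
F = np.array([[0., 1., 0., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 1.],
              [0., 0., 0., 0.]])
H = np.array([[1., 0., 0., 0.]])   # only state 1 is measured

rows = [H]
for _ in range(3):                 # up to H @ F^3 for a 4-state system
    rows.append(rows[-1] @ F)
O = np.vstack(rows)

s = np.linalg.svd(O, compute_uv=False)
observable_rank = int(np.sum(s > 1e-10))
print(s, observable_rank)          # rank 2: states 3-4 are unobservable
```

Adding a measurement row that touches state 3 or 4 raises the rank, which mirrors how the extended observations in the article improve observability.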

https://doi.org/10.1088/1361-6501/ae0149

Physics-informed deep-learning applications to experimental fluid mechanics (Open access)

Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303

High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a high number of high-resolution examples is needed, which may not be available for many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements without having any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically-consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers’ equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
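The structure of a physics-informed loss, a data-fit term plus a residual of the governing equation on collocation points, can be sketched without a neural network. The toy below uses a polynomial ansatz and the simple ODE u' = -u in place of the flow equations studied in the article; the weights, degree, and noisy data value are illustrative.

```python
import numpy as np

deg = 6
t_col = np.linspace(0.0, 2.0, 50)                 # collocation points
V = np.vander(t_col, deg + 1, increasing=True)    # u(t) = sum c_k t^k
k = np.arange(deg + 1)
Vd = np.zeros_like(V)
Vd[:, 1:] = V[:, :-1] * k[1:]                     # du/dt columns

# Physics rows enforce u' + u = 0; one weighted data row pins u(0)
# to a noisy sample (1.02 instead of the true 1.0).
A = np.vstack([Vd + V, 10.0 * np.vander([0.0], deg + 1, increasing=True)])
b = np.concatenate([np.zeros(t_col.size), [10.0 * 1.02]])
c, *_ = np.linalg.lstsq(A, b, rcond=None)

u1 = (np.vander([1.0], deg + 1, increasing=True) @ c)[0]
print(u1)   # close to 1.02 * exp(-1): a single data point plus physics
```

A PINN replaces the polynomial with a network and the linear least squares with gradient descent, but the two-term loss is the same idea.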

https://doi.org/10.1088/1361-6501/ad3fd3

A hybrid method for fault diagnosis of rolling bearings (Open access)

Yuchen He et al 2024 Meas. Sci. Technol. 35 125012

Traditional diagnostic methods often have insufficient accuracy and noise reduction, which leads to diagnostic errors. To address these issues, this paper proposes an advanced fault diagnosis model that combines variational mode decomposition (VMD), improved by a Variable-Objective Search Whale Optimization Algorithm (VSWOA), with a Pelican Optimization Algorithm (POA)-boosted Kernel Extreme Learning Machine (KELM). The application of the method is demonstrated on the fault diagnosis of rolling bearings. The proposed VSWOA enhances the performance of VMD by incorporating a Sobol sequence, nonlinear time-varying factors, a multi-objective initial search strategy, and an elite Cauchy chaos mutation strategy, significantly improving noise reduction in vibration signals. Fault information is precisely extracted using waveform factors, sample entropy, and advanced composite multiscale fuzzy entropy, which enables effective feature screening and dimensionality reduction. The POA fine-tunes the KELM parameters, increasing the classification accuracy. The effectiveness of the model is verified through experimental evaluations using bearing data with injected Gaussian noise (from Case Western Reserve University) and the SpectraQuest datasets, where significant improvements in noise reduction and fault detection accuracy are achieved.
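Two of the features named above can be sketched on a synthetic signal. The waveform factor and a naive sample entropy below follow their standard definitions, with the common choices m = 2 and r = 0.2·std assumed; the article's composite multiscale fuzzy entropy is a more elaborate variant.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000, endpoint=False)
sig = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)

def waveform_factor(x):
    """RMS divided by mean absolute value (~1.11 for a clean sine)."""
    return np.sqrt(np.mean(x ** 2)) / np.mean(np.abs(x))

def sample_entropy(x, m=2, r_frac=0.2):
    """Naive O(N^2) sample entropy with Chebyshev distance."""
    r = r_frac * np.std(x)
    def pairs(mm):
        emb = np.lib.stride_tricks.sliding_window_view(x, mm)
        d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
        return (np.sum(d <= r) - len(emb)) / 2   # exclude self-matches
    return -np.log(pairs(m + 1) / pairs(m))

wf, se = waveform_factor(sig), sample_entropy(sig)
print(wf, se)
```

Regular signals give low sample entropy, noisy or faulty ones higher values, which is what makes it usable for feature screening.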

https://doi.org/10.1088/1361-6501/ad774d

Review of principles and research progress of precision measurement methods based on wavefront interference imaging (Open access)

Liang Yu et al 2025 Meas. Sci. Technol. 36 122001

Advanced manufacturing, precision metrology, space exploration, and other fields increasingly rely on multi-degree-of-freedom (multi-DOF) and high-precision measurements. The demands of such measurement systems—structural complexity, systematic error control, and information decoupling—have challenged traditional interferometric techniques. Wavefront interference imaging, which integrates laser interferometry with image analysis, has emerged as an advanced technique capable of subnanometer displacement and submicroradian angular resolution using a single laser beam. This method has gained importance in multi-DOF measurement technologies because it simultaneously obtains ultra-precise multi-DOF measurements with a compact setup, strong decoupling capability, and high integrability. This review systematically examines the development of wavefront interference imaging. Beginning with physical modeling of interference fringes, it traces the evolution of representative measurement models from two-dimensional to six-DOF configurations and analyzes the potential integration of this technique with emerging deep learning–based fringe processing methods. The paper further discusses frequency and phase decoupling algorithms in both the spatial and spectral domains and summarizes recent applications of this technology to nanometric coordinate measurements, atomic force microscopy, laser leveling, and spacecraft systems. The transition of wavefront interference imaging, from single-DOF extraction to coupled modeling and real-time resolution of multi-DOFs, demonstrates the excellent system scalability and application potential of this technology. This review aims to establish a theoretical framework and developmental roadmap for wavefront interference imaging, facilitating the advancement of high-precision, high-dimensional measurement systems in related domains.
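One building block of fringe processing, spectral-domain phase extraction, can be sketched in one dimension: isolate the carrier sideband in the Fourier domain, transform back, and unwrap the argument. The carrier frequency and test phase below are illustrative, not taken from the review.

```python
import numpy as np

n = 1024
x = np.arange(n) / n
f0 = 64                                    # carrier frequency [cycles]
phi = 0.5 * np.sin(2 * np.pi * x)          # phase to be recovered
fringes = 1 + np.cos(2 * np.pi * f0 * x + phi)

F = np.fft.fft(fringes)
mask = np.zeros(n, dtype=bool)
mask[f0 - 16 : f0 + 16] = True             # keep only the +f0 sideband
analytic = np.fft.ifft(np.where(mask, F, 0))

phase = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
print(np.max(np.abs(phase - phi)))         # recovery error is tiny here
```

Two-dimensional fringe patterns use the same idea with a 2D FFT; decoupling several degrees of freedom amounts to separating several such carriers.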

https://doi.org/10.1088/1361-6501/ae2498

3D reconstruction and volume measurement of irregular objects based on RGB-D camera (Open access)

Yu Zhu et al 2024 Meas. Sci. Technol. 35 125010

To address the challenge of measuring the volumes of irregular objects, this paper proposes a volume measurement method based on 3D point cloud reconstruction. Point clouds of the object are captured from multiple angles by an RGB-D camera mounted on a robotic arm and then registered into a single complete point cloud from which the volume is calculated. Firstly, the robotic arm is controlled to move to four angles for capturing the original point clouds of the target. Then, using the rotation and translation matrices obtained from calibration-block pre-registration, the point cloud data from the four angles are fused and reconstructed. Subsequently, the issue of missing bottom point cloud data is addressed using a bottom-filling algorithm. Following this, the efficiency of the point cloud volume calculation algorithm is enhanced through axis-aligned bounding box filtering. Finally, the volume of the reconstructed point cloud is calculated using a slicing algorithm that integrates 2D point cloud segmentation and point cloud sorting. Experimental results show that this method achieves a volume measurement accuracy of over 95% for irregular objects and exhibits good robustness.
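The slicing idea can be sketched as: cut the cloud into z-slices, estimate each slice's cross-section area, and sum area times thickness. The sketch below uses a 2D convex hull with the shoelace formula for the cross-sections (which assumes convex slices, unlike the article's segmentation-and-sorting approach) and a synthetic cylinder as test data.

```python
import numpy as np

def _cross(o, a, b):
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def hull_area(pts):
    """Area of the 2D convex hull (Andrew's monotone chain + shoelace)."""
    pts = pts[np.lexsort((pts[:, 1], pts[:, 0]))]
    def chain(p):
        h = []
        for q in p:
            while len(h) >= 2 and _cross(h[-2], h[-1], q) <= 0:
                h.pop()
            h.append(q)
        return h[:-1]
    hull = np.array(chain(pts) + chain(pts[::-1]))
    hx, hy = hull[:, 0], hull[:, 1]
    return 0.5 * abs(np.dot(hx, np.roll(hy, 1)) - np.dot(hy, np.roll(hx, 1)))

def sliced_volume(cloud, n_slices=20):
    z0, z1 = cloud[:, 2].min(), cloud[:, 2].max()
    dz = (z1 - z0) / n_slices
    vol = 0.0
    for i in range(n_slices):
        m = (cloud[:, 2] >= z0 + i * dz) & (cloud[:, 2] < z0 + (i + 1) * dz)
        if m.sum() >= 3:
            vol += hull_area(cloud[m, :2]) * dz
    return vol

# Surface-sampled cylinder: radius 1, height 2, true volume 2*pi.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 20000)
z = rng.uniform(0, 2, 20000)
cloud = np.column_stack([np.cos(theta), np.sin(theta), z])
vol = sliced_volume(cloud)
print(vol)   # close to 2*pi
```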

https://doi.org/10.1088/1361-6501/ad7621

AEEFCSR: an adaptive ensemble empirical feed-forward cascade stochastic resonance system for weak signal detection (Open access)

Li Che et al 2024 Meas. Sci. Technol. 35 126108

A novel adaptive ensemble empirical feed-forward cascade stochastic resonance (AEEFCSR) method is proposed in this study to address the challenge of detecting target signals in intense background noise. First, an unsaturated piecewise self-adaptive variable-stable potential function is created to overcome the limitations of traditional potential functions. Subsequently, building on a feed-forward cascaded stochastic resonance method, a novel weighting function and system architecture are created, which effectively address the issue of low-frequency noise enrichment through ensemble empirical mode decomposition. Lastly, inspired by the spider wasp algorithm and the nutcracker optimization algorithm, the spider wasp nutcracker optimization algorithm is proposed to optimize the system parameters and overcome the reliance on manual experience. To evaluate performance, the output signal-to-noise ratio (SNR), spectral sub-peak difference, and time-domain recovery capability are used as evaluation metrics. The effectiveness of the AEEFCSR method is first demonstrated through theoretical analysis and then validated on multiple engineering datasets. The results show that, compared with the comparison algorithms, the output SNR of the AEEFCSR method is at least 6.2801 dB higher, the spectral sub-peak difference is more than 0.25 higher, and the time-domain recovery is more accurate. In summary, the AEEFCSR method has great potential for weak signal detection in complex environments.
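For reference, an output-SNR metric of the kind quoted above can be sketched as the ratio, in dB, of the spectral power in the target-frequency bin to the mean background power. The 10 Hz tone, noise level, and sampling setup below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, f_sig, n = 1024, 10, 4096          # fs chosen so the tone hits a bin
t = np.arange(n) / fs
x = 0.3 * np.sin(2 * np.pi * f_sig * t) + rng.standard_normal(n)

P = np.abs(np.fft.rfft(x)) ** 2
freqs = np.fft.rfftfreq(n, 1 / fs)
k = int(np.argmin(np.abs(freqs - f_sig)))     # bin of the target tone
noise = np.delete(P[1:], k - 1)               # drop DC and the tone bin
snr_db = 10 * np.log10(P[k] / noise.mean())
print(snr_db)
```

Comparing this metric before and after a detector is how "at least 6.2801 dB higher" claims are quantified.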

https://doi.org/10.1088/1361-6501/ad7480

PIV uncertainty quantification from correlation statistics (Open access)

Bernhard Wieneke 2015 Meas. Sci. Technol. 26 074002

The uncertainty of a PIV displacement field is estimated using a generic post-processing method based on statistical analysis of the correlation process using differences in the intensity pattern in the two images. First the second image is dewarped back onto the first one using the computed displacement field which provides two almost perfectly matching images. Differences are analyzed regarding the effect of shifting the peak of the correlation function. A relationship is derived between the standard deviation of intensity differences in each interrogation window and the expected asymmetry of the correlation peak, which is then converted to the uncertainty of a displacement vector. This procedure is tested with synthetic data for various types of noise and experimental conditions (pixel noise, out-of-plane motion, seeding density, particle image size, etc) and is shown to provide an accurate estimate of the true error.
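The first step, dewarping the second image back onto the first with the measured displacement and inspecting the residual intensity differences, can be sketched in one dimension with a synthetic pattern: a correct displacement leaves only noise and interpolation error, while a wrong one leaves a much larger residual. All values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.arange(256, dtype=float)
img1 = np.sin(0.3 * x) + 0.5 * np.sin(0.07 * x)     # synthetic pattern
true_shift = 2.4
img2 = np.interp(x - true_shift, x, img1) + 0.02 * rng.standard_normal(x.size)

def residual(measured_shift):
    """Dewarp img2 by the measured displacement, compare with img1."""
    dewarped = np.interp(x + measured_shift, x, img2)
    return np.std((dewarped - img1)[8:-8])           # ignore the borders

r_good, r_bad = residual(2.4), residual(3.4)
print(r_good, r_bad)   # a 1 px displacement error inflates the residual
```

The method in the article goes further, relating the statistics of these differences to the asymmetry of the correlation peak and hence to a per-vector uncertainty.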

https://doi.org/10.1088/0957-0233/26/7/074002
