Medical robotics holds transformative potential for healthcare. Robots excel in tasks requiring precision, including surgery and minimally invasive interventions, and they can enhance diagnostics through improved automated imaging techniques. Despite this potential, the adoption of robotics still faces obstacles such as high costs, technological limitations, regulatory issues, and concerns about patient safety and data security. This roadmap, authored by an international team of experts, critically assesses the state of medical robotics, highlighting existing challenges and emphasizing the need for novel research contributions to improve patient care and clinical outcomes. It explores advancements in machine learning, highlighting the importance of trustworthiness and interpretability in robotics, the development of soft robotics for surgical and rehabilitation applications, and the role of image-guided robotic systems in diagnostics and therapy. Mini, micro, and nano robotics for surgical interventions, as well as rehabilitation and assistive robots, are also discussed. Furthermore, the roadmap addresses service robots in healthcare, covering navigation, logistics, and telemedicine. For each of the topics addressed, current challenges and future directions for improving patient care through medical robotics are suggested.

ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Dimitris K Iakovidis et al 2025 Meas. Sci. Technol. 36 103001
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standardization (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain, from timing and synchronization of the data acquisition system to illumination, the mechanical properties of the tracer particles, particle imaging, analysis of the particle motion, and data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. These are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
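The Monte Carlo route to uncertainty propagation mentioned above can be illustrated with a toy example: perturb a synthetic velocity field with an assumed per-vector uncertainty and observe the resulting spread of a derived quantity, here the vorticity. The field, grid spacing, sample count and uncertainty level below are illustrative assumptions, not values from the review; this is a minimal sketch, not a validated UQ procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a PIV velocity field on a uniform grid
n, dx = 32, 1e-3                          # vectors per side, vector spacing [m]
x = np.arange(n) * dx
X, Y = np.meshgrid(x, x)
u = np.sin(2*np.pi*X/x[-1])               # u(x, y) [m/s]
v = np.cos(2*np.pi*Y/x[-1])               # v(x, y) [m/s]
sigma_u = 0.05                            # assumed per-vector uncertainty [m/s]

def vorticity(u, v, dx):
    """omega_z = dv/dx - du/dy via central differences."""
    return np.gradient(v, dx, axis=1) - np.gradient(u, dx, axis=0)

# Monte Carlo propagation: perturb the vectors, observe the spread of omega
samples = [vorticity(u + sigma_u*rng.standard_normal(u.shape),
                     v + sigma_u*rng.standard_normal(v.shape), dx)
           for _ in range(500)]
u_omega = np.std(samples, axis=0)         # pointwise vorticity uncertainty

# Linearised (Taylor-series) prediction for uncorrelated errors and central
# differences: u_omega ~= sigma_u/dx, matching the Monte Carlo interior values
print(u_omega[8:-8, 8:-8].mean(), sigma_u/dx)
```

For uncorrelated vector errors and central differences, the linearised estimate and the Monte Carlo spread agree in the interior of the field; near the edges one-sided differences change the error amplification, which is one reason the sampling approach is attractive in practice.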
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
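The gating principle described above is easy to simulate: Raman photons arrive essentially within the laser pulse, while fluorescence photons carry an additional, roughly exponential delay, so a sub-nanosecond detector gate rejects most of the fluorescence. The pulse width, fluorescence lifetime and photon counts below are illustrative assumptions, not measured values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative timescales: ~100 ps laser pulse, ~3 ns fluorescence lifetime
pulse_fwhm, fluor_tau = 0.1, 3.0                 # [ns]
n_raman, n_fluor = 5_000, 50_000                 # photons per exposure

sigma = pulse_fwhm / 2.355                       # FWHM -> Gaussian sigma
t_raman = rng.normal(0.0, sigma, n_raman)        # Raman: within the pulse
t_fluor = rng.normal(0.0, sigma, n_fluor) + rng.exponential(fluor_tau, n_fluor)

def accepted(t, gate):
    """Photons accepted by a detector gate of width `gate` ns centred on the pulse."""
    return np.count_nonzero(np.abs(t) < gate/2)

# Ungated (CW-like) detection vs. a 200 ps gate
sb_cw = accepted(t_raman, 100.0) / accepted(t_fluor, 100.0)
sb_tg = accepted(t_raman, 0.2) / accepted(t_fluor, 0.2)
print(f"signal/background: ungated {sb_cw:.2f}, gated {sb_tg:.2f}")
```

With these assumed numbers the gate improves the signal-to-background ratio by over an order of magnitude, which is the mechanism behind the fluorescence rejection discussed above.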
Ozlem Karsli et al 2026 Meas. Sci. Technol. 37 045901
Turkish Accelerator and Radiation Laboratory (TARLA) is proposed as the first accelerator-based user facility within the framework of the Turkish Accelerator Center project initiative. Following the completion of technical design studies and infrastructure development, the installation of accelerator components commenced. TARLA is designed to produce free-electron laser (FEL) and bremsstrahlung radiation utilizing state-of-the-art superconducting accelerator technology. The facility aims to operate FELs in the wavelength range of 5–350 µm by accelerating electrons to energies between 15 and 40 MeV, and to generate bremsstrahlung radiation with electrons accelerated up to 30 MeV. As a major milestone, the first stable electron beam was successfully generated in April 2024, marking the completion of the facility’s first operational phase. This study presents the commissioning efforts for the electron beam, outlines the current operational status of the facility, and discusses future plans and the research potential of TARLA.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, a large number of high-resolution examples is needed, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws for learning. In this study, we apply physics-informed neural networks (PINNs) for super-resolution of flow-field data both in time and space from a limited set of noisy measurements, without any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers’ equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capabilities of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
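The core idea of a PINN-style composite loss — a data misfit on sparse noisy measurements plus a PDE-residual penalty — can be sketched without a deep-learning framework by evaluating the Burgers residual with finite differences, as a stand-in for the automatic differentiation a real PINN would use. The exact solution, noise level and sampling fraction below are illustrative assumptions, not the cases studied in the paper.

```python
import numpy as np

nu = 0.01                                   # assumed viscosity
x = np.linspace(-1.0, 1.0, 201)
t = np.linspace(0.0, 1.0, 101)
dx, dt = x[1] - x[0], t[1] - t[0]
X, T = np.meshgrid(x, t, indexing="ij")
u_exact = X / (1.0 + T)                     # an exact solution of Burgers' equation

def residual(u):
    """Burgers residual u_t + u u_x - nu u_xx via central differences."""
    u_t = np.gradient(u, dt, axis=1)
    u_x = np.gradient(u, dx, axis=0)
    u_xx = np.gradient(u_x, dx, axis=0)
    return u_t + u*u_x - nu*u_xx

def composite_loss(u, u_meas, mask):
    """PINN-style objective: data misfit on sparse points + physics penalty."""
    return np.mean((u[mask] - u_meas[mask])**2) + np.mean(residual(u)**2)

rng = np.random.default_rng(2)
mask = rng.random(u_exact.shape) < 0.05          # ~5% of points are "measured"
u_meas = u_exact + 0.01*rng.standard_normal(u_exact.shape)

good = composite_loss(u_exact, u_meas, mask)
bad = composite_loss(u_exact + 0.3*np.sin(4*np.pi*X), u_meas, mask)
print(good, bad)    # the physics term heavily penalises the non-physical field
```

The physics term is what lets the network be supervised everywhere in the domain even though data exist only at the sparse measurement points — the mechanism that enables super-resolution without high-resolution references.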
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data ‘twin-friendly’ and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Fu-Sheng Yang et al 2026 Meas. Sci. Technol. 37 035003
The advancement of chip-on-wafer-on-substrate technology has positioned three-dimensional (3D) packaging as a key enabler for next-generation artificial intelligence chips, thereby sustaining the trajectory of Moore’s Law. However, the continued scaling of critical dimensions (CDs), such as through-silicon vias (TSVs) and redistribution layers (RDLs), presents major challenges for CD metrology, primarily due to the difficulty of characterizing submicron hidden structures and limited light penetration. This study introduces an optimization framework based on global sensitivity analysis (GSA) to quantify the impact of CDs on optical responses and assess interaction effects on metrology performance. By combining GSA with rigorous electromagnetic simulations, the proposed approach enhances the extraction of CD information in optical CD (OCD) metrology. In particular, this work systematically analyzes key structural parameters, including depth, top CD, sidewall angle (SWA), and trench spacing, in both submicron silicon trench structures and copper RDLs (Cu RDLs), providing a unified evaluation across two representative classes of semiconductor architectures. Furthermore, polarization optimization guided by sensitivity analysis is applied to maximize optical response sensitivity. Experimental validation shows that the GSA-optimized scatterometry setup achieves high measurement accuracy, maintaining a bias below 2% relative to focused ion beam/scanning electron microscope benchmarks. The findings demonstrate that CDs traditionally difficult to measure, such as depth and SWA of hidden submicron microstructures, can be accurately determined. Overall, the proposed methodology significantly enhances the accuracy and robustness of OCD metrology, providing valuable insights for advancing measurement strategies in 3D semiconductor packaging.
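As a hedged illustration of the GSA step, first-order Sobol indices can be estimated with a pick-freeze (Saltelli-style) scheme. The toy response function below is a stand-in for the rigorous electromagnetic solver, with an invented interaction term between two of the CDs; all parameter names and values are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(x):
    """Toy stand-in for the electromagnetic solver: structural CDs
    (depth, top CD, SWA) -> scalar optical response, with a depth-SWA
    interaction. Purely illustrative."""
    depth, top_cd, swa = x[:, 0], x[:, 1], x[:, 2]
    return 2.0*depth + top_cd + 0.3*depth*swa

# Pick-freeze (Saltelli-style) estimation of first-order Sobol indices
n, d = 200_000, 3
A = rng.standard_normal((n, d))
B = rng.standard_normal((n, d))
yA, yB = response(A), response(B)
var_y = np.var(np.concatenate([yA, yB]))

S1 = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                      # freeze all inputs except the i-th
    S1.append(np.mean(yB * (response(ABi) - yA)) / var_y)

print(np.round(S1, 3))   # depth dominates; SWA acts only through the interaction
```

Note how the first-order index of the SWA variable is near zero even though it influences the output through the interaction term — exactly the kind of effect that makes interaction-aware GSA valuable for choosing a measurement configuration.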
Govind Vashishtha et al 2025 Meas. Sci. Technol. 36 022001
The growing complexity of machinery and the increasing demand for operational efficiency and safety have driven the development of advanced fault diagnosis techniques. Among these, convolutional neural networks (CNNs) have emerged as a powerful tool, offering robust and accurate fault detection and classification capabilities. This comprehensive review delves into the application of CNNs in machine fault diagnosis, covering its theoretical foundation, architectural variations, and practical implementations. The strengths and limitations of CNNs are analyzed in this domain, discussing their effectiveness in handling various fault types, data complexities, and operational environments. Furthermore, we explore the evolving landscape of CNN-based fault diagnosis, examining recent advancements in data augmentation, transfer learning, and hybrid architectures. Finally, the future research directions and potential challenges to further enhance the application of CNNs for reliable and proactive machine fault diagnosis are highlighted.
Liang Yu et al 2025 Meas. Sci. Technol. 36 122001
Advanced manufacturing, precision metrology, space exploration, and other fields increasingly rely on multi-degree-of-freedom (multi-DOF) and high-precision measurements. The demands of such measurement systems—structural complexity, systematic error control, and information decoupling—have challenged traditional interferometric techniques. Wavefront interference imaging, which integrates laser interferometry with image analysis, has emerged as an advanced technique capable of subnanometer displacement and submicroradian angular resolution using a single laser beam. This method has gained importance in multi-DOF measurement technologies because it simultaneously obtains ultra-precise multi-DOF measurements with a compact setup, strong decoupling capability, and high integrability. This review systematically examines the development of wavefront interference imaging. Beginning with physical modeling of interference fringes, it traces the evolution of representative measurement models from two-dimensional to six-DOF configurations and analyzes the potential integration of this technique with emerging deep learning–based fringe processing methods. The paper further discusses frequency and phase decoupling algorithms in both the spatial and spectral domains and summarizes recent applications of this technology to nanometric coordinate measurements, atomic force microscopy, laser leveling, and spacecraft systems. The transition of wavefront interference imaging, from single-DOF extraction to coupled modeling and real-time resolution of multi-DOFs, demonstrates the excellent system scalability and application potential of this technology. This review aims to establish a theoretical framework and developmental roadmap for wavefront interference imaging, facilitating the advancement of high-precision, high-dimensional measurement systems in related domains.
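A minimal example of fringe-based displacement readout in the spirit of the review: demodulate the carrier-fringe phase in the spectral domain and convert it to displacement. The wavelength, carrier frequency, fringe visibility and double-pass phase-to-displacement relation d = φλ/(4π) are illustrative assumptions, not the specific measurement models surveyed.

```python
import numpy as np

lam = 632.8e-9                      # He-Ne wavelength [m], illustrative
N, f0 = 1024, 12                    # samples per line, carrier fringes per frame
x = np.arange(N) / N
d_true = 40e-9                      # assumed displacement [m]
phi_true = 4*np.pi*d_true/lam       # double-pass phase shift
I = 1.0 + 0.8*np.cos(2*np.pi*f0*x + phi_true)   # fringe intensity profile

# Spectral-domain demodulation: read the phase at the +f0 carrier peak
phi_est = np.angle(np.fft.fft(I)[f0])
d_est = phi_est*lam/(4*np.pi)
print(d_est*1e9)                    # displacement in nm
```

Reading the phase of a single spectral peak is the one-dimensional analogue of the frequency- and phase-decoupling algorithms discussed above; multi-DOF systems separate several such carriers in the fringe spectrum.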
Xuechao Liu et al 2026 Meas. Sci. Technol. 37 046128
Visual-based roughness detection is currently an important component of mechanical processing and intelligent manufacturing. Nevertheless, the method still faces several unresolved challenges. First, the accuracy of roughness estimation is limited by the model’s capacity to characterize the surface texture features of the workpiece. Second, the identification process must meet detection-speed requirements and is affected by emulsion interference. To address these challenges, we propose a novel lightweight network, VMobileNet, which aims to predict roughness under emulsion interference in turning machining. First, we propose an inverted-residual state space structure, which combines state space relationships with a selective scanning mechanism to efficiently model long-range feature dependencies. Next, the visual state space module is optimized to reduce the impact of redundant information on model complexity. Third, a lightweight channel attention mechanism is introduced to avoid emulsion interference by focusing on regions with clear texture. Finally, we adopt a diffusion model for data augmentation and use it to train VMobileNet. Experimental results show that the proposed method achieves a detection accuracy of 92.6% on a single image with emulsion interference in only 0.006 s. The results demonstrate that this approach has high accuracy and efficiency, and can provide significant support for online roughness detection in intelligent manufacturing scenarios. The dataset is available at: https://github.com/xiaolifighting/Surface-Roughness-Dataset.
Yuxin Deng et al 2026 Meas. Sci. Technol. 37 046127
Addressing the challenge of poor-quality nighttime building images and the difficulty of achieving high-precision fusion modeling with point cloud data under low-light conditions, this paper introduces a comprehensive solution for nighttime image enhancement and point cloud fusion modeling. We developed an unsupervised learning framework that incorporates building structural priors. Based on RetinexDIP theory, the framework integrates a structural edge enhancement module, effectively restoring details in building facades and outlines. Experimental results demonstrate that our method outperforms baseline approaches on both peak signal-to-noise ratio (12.02 dB, a 0.35 dB gain) and information entropy (a 0.08 increase) metrics, while also effectively preserving the integrity of building edge structures. For point cloud registration, we propose a line-feature-constrained registration (LFR) method. Utilizing a planar-and-elevation hierarchical registration strategy, LFR achieved a registration accuracy (RMSE) of 0.04–0.06 m across three test areas, showing significant improvements in accuracy and robustness over traditional methods. Fusion modeling results indicate a maximum increase of 27.6% in the number of image tie points, achieving 100% reconstruction completeness, a surface smoothness improvement to 0.85%–0.94%, and a 35%–48% reduction in overall geometric error. Multi-dimensional validation confirms that our method surpasses comparative approaches in visual quality, geometric accuracy, and model completeness, providing a reliable technical pathway for 3D reconstruction of nighttime buildings.
Zenghui Wang et al 2026 Meas. Sci. Technol. 37 046214
Medical image fusion is an important computer vision task in intelligent medical systems. It aims to integrate complementary information from different imaging modes into more information-rich representations. Although recent deep learning-based methods have demonstrated progress, they still face many challenges in handling modal inconsistency and the flexibility of adaptive fusion of multimodal features. To address these issues, in this paper, we propose an adaptive mixture-of-experts network (AMoENet) for medical image fusion. AMoENet consists of a multi-perspective feature extraction (MFE) module and an adaptive expert (AdEx) module. Specifically, the MFE module is built to extract feature information from the source images. It combines Transformer and attention mechanisms to effectively handle long-range dependencies while enhancing the expressive ability of local and frequency-domain features. Additionally, the AdEx module is built to dynamically allocate fusion weights to each expert. This method enables the weight distribution and feature processing to adapt to each other through multi-expert collaborative optimization, which yields information-rich fused images. Experimental results on medical datasets demonstrate that the proposed model achieves superior performance in quantitative metrics and visual quality, along with strong generalization capabilities. The source code and implementation details are available at https://github.com/zenghui11/AMoENet.
Yang Gan et al 2026 Meas. Sci. Technol. 37 045411
Detecting aircraft targets in optical remote sensing (RS) imagery is essential for airspace management, airport operations, and defense applications. However, RS aircraft detection remains challenging due to the prevalence of small targets, dense spatial arrangements, multi-scale target variations, and complex backgrounds. To address these issues, a lightweight detection model, AR-DETR, is developed for aircraft detection in RS imagery, building upon the real-time detection Transformer framework. Firstly, a cross-aware feature enhancement module is proposed to strengthen both spatial and channel feature interactions, effectively enhancing the representation of small and weakly textured aircraft under cluttered backgrounds. Next, a dual-scale frequency fusion module is designed, which combines multi-scale depth-wise convolutions in the spatial domain with discriminative frequency selection in the frequency domain, thereby improving structural pattern extraction across different scales while suppressing background noise. Finally, an edge-Gaussian contextual enhancement module is developed to replace the conventional neck block, enabling fine-grained detail preservation and contextual enhancement through edge-guided Gaussian modulation with reduced computational cost. Results on the DOTA1.0 dataset demonstrate that AR-DETR achieves superior accuracy and efficiency compared to state-of-the-art detectors. Further validation on the SODA-A (Small Object Detection dAtaset) dataset, which emphasizes small- and medium-sized aircraft, confirms the proposed model’s robustness and generalization under complex RS conditions.
Xiang Lu et al 2026 Meas. Sci. Technol. 37 045016
Roller tank lugs serve as critical guiding devices in vertical shaft hoisting systems, playing a vital role in ensuring stable operation. Currently, existing dimensional measurement methods suffer from poor real-time performance and significant errors under tilt conditions. To address this, this paper proposes a real-time detection technique for roller tank lugs based on YOLOv8. This method acquires images and LiDAR point cloud data of the roller tank lugs without disrupting normal mine operations. Through joint calibration of LiDAR and camera systems, it integrates image and point cloud data to calculate the roller tank lugs’ dimensions. Building upon this foundation, this paper further proposes an inclination-adaptive measurement algorithm for roller tank lugs. Based on the YOLOv8 object detection method, the algorithm first performs corner detection on a calibration plate parallel to the roller tank lugs. It then indirectly calculates the tilt angle using a homography matrix decomposition. Following target identification, edge detection is performed to extract the diameter cross-section. Finally, the actual dimensions of the roller tank lugs are output via the measurement algorithm. Repeated measurements validated that the maximum error of this method consistently remained below 3 mm, with a standard deviation of 0.75 mm in measurement results, demonstrating high accuracy and stability.
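The tilt-recovery step can be sketched with a synthetic homography: for a planar calibration target, H ∝ K [r1 r2 t], so given the camera intrinsics K the rotation columns — and hence the plate tilt — follow from K⁻¹H. The intrinsics, tilt angle and translation below are made-up test values, and this is only one of several standard homography decompositions, not necessarily the one used in the paper.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])            # assumed camera intrinsics

theta_true = np.deg2rad(12.0)                  # plate tilted 12 deg about x
c, s = np.cos(theta_true), np.sin(theta_true)
R = np.array([[1.0, 0.0, 0.0],
              [0.0,   c,  -s],
              [0.0,   s,   c]])
t = np.array([0.05, -0.02, 1.0])

H = K @ np.column_stack([R[:, 0], R[:, 1], t])
H *= 3.7                                       # homographies are scale-ambiguous

def tilt_from_homography(H, K):
    """Recover the plate tilt (deg) from H ~ K [r1 r2 t]."""
    M = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(M[:, 0])        # fix the unknown scale
    r1, r2 = lam*M[:, 0], lam*M[:, 1]
    normal = np.cross(r1, r2)                  # plate normal in camera frame
    cos_tilt = abs(normal[2]) / np.linalg.norm(normal)
    return np.degrees(np.arccos(np.clip(cos_tilt, -1.0, 1.0)))

print(tilt_from_homography(H, K))              # recovers ~12
```

In practice H would be estimated from the detected calibration-plate corners rather than constructed, and noise in the corners propagates into the recovered angle.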
Qilin Qu et al 2026 Meas. Sci. Technol. 37 042002
In Industry 4.0, the integration of artificial intelligence (AI) has transformed industrial operations, with predictive maintenance (PdM) emerging as a key application. As the core of PdM, remaining useful life (RUL) prediction enables a shift from reactive or preventive maintenance to proactive, data-driven strategies, helping to optimize scheduling, reduce downtime, and cut costs. Despite the high predictive accuracy of machine learning and deep learning models in RUL prediction, their inherent ‘black-box’ nature presents significant barriers to practical industrial adoption. These barriers include compromised trust in model outputs, difficulties in debugging and error tracing, challenges in meeting regulatory compliance requirements, and limited alignment with domain-specific expertise. Consequently, explainable AI (XAI) has become indispensable for addressing these issues and enabling wider deployment of RUL prediction systems in industrial settings. This paper presents a comprehensive survey of XAI for industrial RUL prediction. The survey begins by exploring the significance of RUL prediction, challenges of black-box models, the role of XAI, and its value in enhancing the reliability of RUL methods. Subsequently, it systematically synthesizes recent research advancements, categorizes XAI techniques into five types, and traces their technical evolution. It also develops a holistic evaluation framework encompassing approaches, properties, and metrics, supported by 11 functionally grounded quantitative indicators. Moreover, it establishes a comparative analysis system on eight typical public datasets to clarify their suitability for XAI integration and industrial applications, and puts forward future research directions that combine data, physics, large language models, and digital twins to bridge the gap between academic research and industrial deployment.
Yohana Malagila Raphael et al 2026 Meas. Sci. Technol. 37 042001
The application of deep generative models (DGMs), including generative adversarial networks (GANs), diffusion models, and variational autoencoders, is rapidly transforming the field of echocardiography. These models have proven effective in addressing key challenges in cardiovascular imaging, such as improving image quality, enhancing segmentation and classification accuracy, and mitigating data scarcity. By generating large-scale, high-quality annotated datasets, DGMs enable more reliable and efficient automated cardiac assessments, which are crucial for early and accurate diagnoses. In this study, we provide a scoping review of the use of DGMs in echocardiography, examining their role in augmenting echocardiographic analysis and supporting advanced diagnostic decision-making. This work also introduces various DGM architectures and their core implementation principles, offering fundamental knowledge to professionals from fields such as medicine. DGMs have shown significant promise in generating high-quality synthetic data that enhance model performance, particularly in tasks such as cardiac structure segmentation and abnormality detection. Furthermore, our review highlights that cardiac structure segmentation is the most extensively studied task, with GANs being the most widely adopted DGM type. We also identify common challenges in DGM applications and discuss emerging research directions aimed at improving model performance, the clinical relevance of generated data, and the scalability of DGMs in clinical settings. Overall, this scoping review offers a comprehensive overview of DGMs in echocardiography, identifies gaps in current evidence, and outlines future pathways toward more reliable and clinically meaningful generative approaches for cardiovascular imaging.
Qi Gao et al 2026 Meas. Sci. Technol. 37 032001
As a classical non-destructive testing method, magnetic particle inspection (MPI) holds a significant position in industrial applications due to its simplicity, cost-effectiveness, and reliability. With the advancement of intelligent technologies, MPI is evolving from traditional manual visual inspection toward high-precision intelligent inspection systems that integrate automation control and intelligent analysis algorithms. This paper systematically reviews the current industrial status of MPI and analyses the key factors affecting magnetic particle indications (magnetic field strength, magnetization current, magnetization method, and magnetic powder performance). The study further explores the evolutionary trajectory of intelligent MPI technology, encompassing three core components: image acquisition systems, image processing systems, and spatial positioning systems. Finally, the work identifies persistent challenges in achieving fully intelligent MPI and proposes future development directions focused on enhancing automation levels and optimizing inspection accuracy to advance intelligent MPI technology.
Shuai Yang and Rong Liu 2026 Meas. Sci. Technol. 37 022002
In intelligent manufacturing and industrial operations, reliable condition monitoring of rolling bearings is crucial for sustaining equipment performance, ensuring operational safety, and enabling timely fault prognosis. Under complex operating conditions and non-stationary signal environments, conventional signal processing and deep learning techniques, which typically assume Euclidean spaces, often fail to characterize intricate multisource dependencies and graph topology, limiting advancements in intelligent diagnostics. Owing to their strong capability for modeling data on non-Euclidean domains, graph neural networks (GNNs) provide a graph-based learning framework for rolling bearing fault diagnosis. Addressing the lack of a domain-specific systematic review, this study synthesizes recent advances in GNN-based approaches, classifying existing methods into three modeling pathways, namely temporal graphs, feature graphs, and spatiotemporal graphs, and analyzing their graph construction logic, feature representation strategies, and task adaptability in detail. Using four public datasets and a multi-criteria evaluation, representative methods are systematically compared in terms of classification accuracy, robustness to noise, generalization across operating conditions, and interpretability. Building on a comprehensive synthesis of methodological developments and key challenges, this review establishes a unified modeling framework of practical value and, through integrated multi-metric analyses, proposes targeted research directions to advance theoretical understanding and promote the industrial deployment of GNN-based rolling bearing diagnostic systems.
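As a toy illustration of the temporal-graph pathway described above, a vibration signal can be segmented into overlapping windows (nodes) connected by k-nearest-neighbour edges under cosine similarity. The signal, window length and neighbour count are illustrative, not taken from any of the surveyed methods.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative vibration record -> overlapping windows as graph nodes
sig = np.sin(0.2*np.arange(1200)) + 0.1*rng.standard_normal(1200)
win, hop, k = 128, 64, 3
nodes = np.stack([sig[i:i+win] for i in range(0, len(sig) - win + 1, hop)])

# Edge weights from cosine similarity between window feature vectors
f = nodes / np.linalg.norm(nodes, axis=1, keepdims=True)
S = f @ f.T
np.fill_diagonal(S, -np.inf)                   # forbid self-loops

A = np.zeros_like(S)
for i in range(len(S)):
    A[i, np.argsort(S[i])[-k:]] = 1.0          # connect k most similar windows
A = np.maximum(A, A.T)                         # symmetrise: undirected graph

print(A.shape, int(A.sum(axis=1).min()))       # adjacency ready for a GNN layer
```

Feature graphs and spatiotemporal graphs differ mainly in what the nodes represent (extracted features, or sensor-by-time grids) and in how this adjacency is defined.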
Weixuan Shao et al 2026 Meas. Sci. Technol. 37 022001
Investigating techniques for electrocardiogram (ECG) signal classification is essential in medicine, offering significant potential for the early detection and continuous assessment of cardiac disorders. This review article discusses the background context and current state of cardiovascular disease prevention and therapy, and also developments in computer-assisted ECG signal classification methods. It provides an overview of advancements in ECG signal categorization, including conventional machine learning methods, deep learning frameworks, hybrid models, and specialized methods such as flexible supervision strategies, spiking and hardware-aware neural architectures, and clinical knowledge-driven approaches. In addition, the performance and distinctive features of these models are outlined, emphasizing emerging research avenues and providing substantial technical assistance for future advancements in this field.
Ning et al
Accurate prediction of the Remaining Useful Life (RUL) of rolling bearings is crucial for the safe and efficient operation of mechanical equipment. However, most existing studies only predict the RUL in ideal or low-noise environments, and fail to take into account the complex noise interference of real environments. Therefore, a novel multiscale dilated attention convolutional neural network (MSDA-CNN) is proposed to predict the RUL of rolling bearings. In our approach, the multiscale dilation convolution module (MDCM) leverages dilated causal convolutions to capture degradation characteristics across multiple scales. Furthermore, we incorporate a dynamic attention module (DAM) into the MDCM to refine feature selection and highlight key degradation patterns. To further enhance feature representation, a selective attention fusion module (SAFM) is designed to adaptively calibrate and fuse cross-scale features while suppressing redundant and noisy information. In addition, the dynamic feature fusion module (DFFM) integrates multi-path features through convex-weighted gating instead of high-dimensional concatenation, thereby improving efficiency. Extensive experiments on the PHM 2012 bearing dataset under varying noise scenarios demonstrate that the proposed MSDA-CNN substantially outperforms several advanced models in terms of prediction accuracy and noise robustness, confirming its effectiveness and superiority for industrial applications.
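A dilated causal convolution — the building block behind the MDCM's multiscale receptive fields — can be sketched in a few lines. The kernel, dilation rates and test signal below are illustrative assumptions, not the trained filters of the proposed network.

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D causal convolution: the output at time t depends only on
    x[t], x[t-d], x[t-2d], ... (left zero-padding preserves length)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j]*xp[i + pad - j*dilation] for j in range(k))
                     for i in range(len(x))])

# The same small kernel at dilations 1, 2, 4 covers receptive fields of
# 3, 5 and 9 samples: degradation trends at several timescales
rng = np.random.default_rng(4)
x = np.sin(0.1*np.arange(64)) + 0.05*rng.standard_normal(64)
w = np.array([0.25, 0.5, 0.25])
features = [dilated_causal_conv(x, w, d) for d in (1, 2, 4)]

# Causality check: perturbing a future sample leaves all earlier outputs intact
x2 = x.copy(); x2[40] += 1.0
y, y2 = dilated_causal_conv(x, w, 4), dilated_causal_conv(x2, w, 4)
print(np.allclose(y[:40], y2[:40]), np.allclose(y, y2))
```

Causality matters for prognostics because an RUL estimate at time t must not peek at measurements that have not yet been recorded.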
Lu et al
Fault diagnosis of analog circuits is crucial for enhancing the safety and reliability of power systems and reducing operation and maintenance losses. However, under noisy environments and limited fault samples, the accuracy of existing diagnostic methods often declines significantly. To overcome these limitations, this paper proposes an analog circuit fault diagnosis method that combines Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN) and an improved Deep Convolutional Generative Adversarial Network (DCGAN). First, CEEMDAN is employed to decompose the original circuit signals into several Intrinsic Mode Functions (IMFs). The denoised signals are then obtained by reconstructing the IMFs based on correlation coefficients. Subsequently, Short-Time Fourier Transform (STFT) is applied to the denoised signals to generate time-frequency images for extracting more comprehensive fault information. The SA-DCGAN model is created by incorporating a Self-Attention mechanism (SA) and Spectral Normalization (SN) into DCGAN to enhance model stability and feature extraction. The time-frequency images are fed into SA-DCGAN for data augmentation, effectively alleviating the issue of limited samples. Finally, a 2-D convolutional neural network (2DCNN) is used for classification. Simulation results confirm that the proposed approach has better noise resistance and diagnostic performance. Experimental results demonstrate that the proposed method achieves diagnostic accuracies of 99.36%, 97.12%, and 95.08% on three benchmark circuits, respectively, with only 20 training samples per class. Furthermore, across a signal-to-noise ratio (SNR) range of 0 dB to 100 dB, it consistently maintains the highest diagnostic accuracy.
Wei et al
In the industrial field, the reliability of rotating machinery has a key impact on production safety and operation efficiency. Current fault prediction and health management methods usually rely on task-specific models, which face significant challenges when dealing with signal characteristics under different operating conditions. Existing lightweight network models have weak anti-noise ability in noisy environments and lack cross-condition diagnostic ability and generalization. Inspired by the Transformer-CNN (convolutional neural network) collaboration model, this study introduces a novel fault diagnosis model named SMAConvFormer to tackle the aforementioned challenges. First, a multi-scale channel attention embedded separable convolution is proposed to dynamically enhance the feature responses of key channels via channel attention, suppress noise-induced redundancy, and accurately capture multi-scale local receptive field features that represent the early operational stage of mechanical equipment under noisy conditions. Second, a synergistic multi-dimensional self-attention mechanism is proposed, which incorporates broadcast self-attention to model global temporal correlations, multi-scale spatial attention to capture local high-frequency impulsive features, and progressive channel self-attention to optimize channel weight allocation, thereby enabling the collaborative extraction of correlation features across different frequency bands. Noise interference is effectively suppressed, and the model's diagnostic adaptability under varying operating conditions is significantly enhanced. Finally, experiments show that SMAConvFormer outperforms recent fault diagnosis methods in terms of diagnostic performance and generalization ability; in addition, the effectiveness of the proposed modules is also verified.
Li et al
Modal sensitivity-based model updating has proven to be an effective approach for damage identification. However, the numerical ill-conditioning due to the large condition number of the sensitivity matrix, and the uncertainties caused by inadequacies in models and noise in measurements, greatly affect the performance of existing modal sensitivity-based model updating. In our earlier work, we proposed a robust sparse Bayesian learning (RSBL) method, in which a mixture of Gaussians (MoGs) is employed to accurately model the real uncertainties of damage identification. RSBL effectively mitigates the impact of uncertainties on damage identification, but its performance is still limited by the ill-conditioning. In this paper, we show theoretically that the condition number of the sensitivity matrix governs the accuracy of damage identification, and propose to reduce it by incorporating matrix balancing into RSBL. As a result, an improved RSBL method based on modal sensitivity using matrix balancing and MoGs is proposed, solved through an iterative Expectation-Maximization algorithm combined with the Laplace approximation. Extensive numerical and experimental studies show that the proposed method significantly reduces the condition number of the sensitivity matrix, which in turn remarkably improves the accuracy of damage identification compared to RSBL.
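The effect of matrix balancing on conditioning can be illustrated with a toy sensitivity matrix; the simple column scaling below is an assumed stand-in for the paper's balancing scheme, not its actual algorithm.

```python
import numpy as np

# Toy illustration (not the paper's algorithm): diagonal column scaling,
# a simple form of matrix balancing, shrinks the condition number of an
# ill-conditioned sensitivity matrix S before it is used for updating.
rng = np.random.default_rng(0)
S = rng.standard_normal((6, 6)) * np.array([1e-4, 1e-2, 1.0, 1e2, 1e4, 1e6])

D = np.diag(1.0 / np.linalg.norm(S, axis=0))   # balance: unit-norm columns
S_bal = S @ D

print(np.linalg.cond(S))       # huge: column scales span ~10 orders of magnitude
print(np.linalg.cond(S_bal))   # moderate after balancing
# A solution y of S_bal y = r maps back to the original unknowns as x = D y.
```

The inversion is then performed on the well-conditioned `S_bal`, and the diagonal transform is undone afterwards, so the recovered parameters are unchanged in exact arithmetic but far less noise-sensitive in practice.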
H Alashker et al
The limited availability of installation space downstream of valve-induced flow disturbances represents a critical metrological challenge, as ultrasonic flowmeters (UFM) exhibit high sensitivity to such disturbances. International standards currently overlook the impact of field installation on the accuracy of UFM. This paper presents and compares the influences of flow disturbances induced by a butterfly valve and a knife-gate valve on the performance of a clamp-on transit-time UFM. Measurements were performed at non-standard sections downstream of each valve by varying the angular position (α) of the ultrasonic transducers in 30° increments around the pipe center. The tests were conducted for two distinct valve closing positions, corresponding to Reynolds numbers (Re) of 77,600 and 360,000. A central aim of this study is to verify that the measurements obtained in such non-standard sections are reliable and acceptable. The study further improves the accuracy of UFM measurement through a compensation criterion that does not require maintaining the standard pipeline length. CFD models were created to provide insight into the differences in installation effects. The CFD predictions show good agreement with the experimental results, with mean absolute percentage errors of 2.2% for the butterfly valve and 5.43% for the knife-gate valve. The results showed that, at the same Re, the transit-time signal stability and measurement accuracy differ noticeably between the two valve types, and that the spatial distribution of flow velocity strongly influences the performance of the UFM. The study revealed that at 9DN downstream of the disturbance element for higher Re, the correction factor (Cfac) does not exceed 1.02, where the measurement error (δ) and its repeatability uncertainty remain within the permissible limits without the need to determine an optimal α. The study also identified values of α at which the correction factor becomes minimal at distances below 9DN.
Abdulaziz Alghlaigah and Gao Min 2026 Meas. Sci. Technol. 37 045503
Data scattering occurs in impedance spectroscopy (I-S) measurements of solar cells under light illumination, which impedes reliable data fitting for photovoltaic parameter extraction. This paper reports a method that enables reliable I-S data fitting from severely scattered I-S plots. The study shows that a valid curve fit of I-S data is possible by using a part of the data hidden in a seemingly scattered plot. Theoretical analysis confirms the validity of this approach, revealing that a pattern emerging from the data obtained over a high frequency range represents the genuine part of the I-S curve. This new method was employed to investigate several commercial silicon solar cells. The results show that the photovoltaic parameters of solar cells under 1 sun illumination can be extracted with good repeatability from severely scattered measurement data, offering the capability of determining the dynamic properties of the solar cells, such as junction capacitance and charge carrier lifetime, under conditions that were not previously accessible.
Markus Ahnert and Thomas Schalk 2026 Meas. Sci. Technol.
Background: Respirometry is a widely used tool for analyzing and optimizing biological wastewater treatment processes. However, existing respirometers are often complex and expensive, which limits their applicability in settings with limited resources. This study presents a simple and cost-effective respirometer based on standard laboratory equipment that can be used to monitor and evaluate microbial activity in wastewater treatment plants.
Methods: The proposed methodology involves constructing a respirometer with a beaker and a magnetic stirrer. The device has two operating modes: intermittent and continuous aeration. It is suitable for open-vessel experiments. A mathematical model was adapted to estimate microbial respiration rates from oxygen consumption data while accounting for passive oxygen transfer. 
Results: The intermittent aeration mode demonstrated good reproducibility. However, the continuous aeration mode resulted in a lower error rate, suggesting that it may be more effective for detecting subtle changes in respiration activity. 
Conclusions: The respirometer described in this study is simple, cost-effective and provides a practical solution for evaluating microbial respiration in wastewater treatment plants. The methodology is highly accessible, requiring only equipment that is widely available, and it can be used for routine monitoring and model calibration. Intermittent and continuous aeration modes offer different advantages and limitations that should be considered depending on the desired application. Overall, this study underscores the importance of simplifying respirometry techniques to enhance their accessibility to researchers and practitioners in wastewater treatment.
Hai Yu et al 2026 Meas. Sci. Technol.
To improve the absolute positioning accuracy of image-type linear displacement measurement, this paper proposes a method for compensating the error caused by variation of the reading-head spacing in linear grating displacement measurement. Firstly, the principles behind image-type linear displacement measurement are explained in detail. Secondly, an error model for the variation of the distance between the reading head and the scale grating is established. Thirdly, a self-correction method valid over the whole measurement range is derived from this spacing-variation error model. In the test, the spacing between the reading head and the scale grating was varied significantly by an intentionally inclined installation, and the method reduced the error range over 0-240 mm from -6.4 μm to 0.2 μm down to -1.3 μm to 1 μm. The results confirm the efficacy of the proposed method. Compared with traditional compensation techniques, this method needs neither calibration of the range error nor strict parallelism between the reading head and the scale grating, enabling real-time self-correction of the spacing-variation error. This research may provide a sound technical basis for further exploration of high-precision, large-range linear displacement measurement methods.
Marinel Costel Temneanu et al 2026 Meas. Sci. Technol. 37 046122
This paper presents a novel and efficient method for accurate phase shift estimation between sinusoidal signals, based on a variable-resolution discrete Fourier transform (DFTv). Traditional fast Fourier transform (FFT)-based approaches, which rely on frequency-domain oversampling via extensive zero-padding, often incur high computational cost while remaining sensitive to spectral leakage and frequency bin misalignment. In contrast, the proposed DFTv method dynamically adjusts the frequency resolution during analysis by using the amplitude of the previously computed spectral component as a control parameter. This amplitude-dependent resolution scheme allows the algorithm to apply finer frequency steps near dominant spectral peaks and coarser steps in low-energy regions, significantly reducing the number of computations without sacrificing estimation accuracy. The effectiveness of the method is demonstrated through comparative tests against the classical zero-padded FFT technique. Additional experiments were conducted in the presence of additive white noise, showing that DFTv maintains its accuracy and robustness under noisy conditions. Due to its adaptability, low computational complexity, and resilience to noise, the proposed method is particularly well-suited for real-time phase shift estimation in resource-constrained embedded systems requiring precise sinusoidal signal analysis.
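The coarse-to-fine idea behind an amplitude-controlled variable-resolution DFT can be sketched as follows; the two-pass step control below is a simplification of the paper's amplitude-dependent rule, and all signal parameters are made up.

```python
import numpy as np

# Hedged sketch of the idea behind a variable-resolution DFT (DFTv):
# evaluate single-frequency DFT bins directly, stepping coarsely in
# low-energy regions and finely near the dominant peak, then read the
# phase shift between two sinusoids from the bin arguments. The paper's
# exact amplitude-driven step-control rule is not reproduced here.

def dft_bin(x, f, fs):
    n = np.arange(len(x))
    return np.sum(x * np.exp(-2j * np.pi * f * n / fs)) / len(x)

fs, f0, phi = 1000.0, 50.3, 0.7            # sample rate, true freq, true shift (rad)
t = np.arange(2048) / fs
x1 = np.sin(2 * np.pi * f0 * t)
x2 = np.sin(2 * np.pi * f0 * t - phi)

# Coarse pass (1 Hz steps), then fine pass (0.01 Hz) around the peak.
coarse = np.arange(1.0, 100.0, 1.0)
fc = coarse[np.argmax([abs(dft_bin(x1, f, fs)) for f in coarse])]
fine = np.arange(fc - 1.0, fc + 1.0, 0.01)
fe = fine[np.argmax([abs(dft_bin(x1, f, fs)) for f in fine])]

# Phase shift = difference of bin arguments at the refined frequency.
dphi = np.angle(dft_bin(x1, fe, fs)) - np.angle(dft_bin(x2, fe, fs))
dphi = (dphi + np.pi) % (2 * np.pi) - np.pi
print(fe, dphi)   # close to 50.3 and 0.7
```

Because each bin is a single direct sum, the fine resolution is spent only near the dominant peak, instead of zero-padding and transforming the whole spectrum.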
M García-Patrón et al 2026 Meas. Sci. Technol. 37 045015
The radio-frequency (RF) breakdown, or multipactor effect, is a physical phenomenon affecting RF equipment on board spacecraft, which may result in the failure of the space mission. Therefore, qualification against multipactor is mandatory for space systems. In this paper, enhanced detection methods (DM) are proposed to improve the operation and outcome of multipactor testing, supported by experimental results obtained for the L-band of global navigation satellite systems (GNSS), contributing physical insight and new technical data to measurement science in high-power RF testing. These tests have been performed with different coaxial and radiating devices at L-band, with over 1 kW of pulsed power and at temperatures in the range from −60 °C to 90 °C. An effective combination of various methods is presented, emphasizing the distinction between false and true multipaction events. In addition, a complementary perspective is introduced that distinguishes between primary and secondary DMs, highlighting how certain secondary techniques have proven highly effective in enhancing multipactor detection performance.
Samah A Albdour et al 2026 Meas. Sci. Technol.
Liquid-film condensation plays a pivotal role in enhancing thermal performance and ensuring the safety of systems such as nuclear-reactor cooling loops, industrial heat exchangers, and spacecraft thermal-control networks. However, measuring film thickness and dynamics with precision remains a challenge, owing to the wide range of spatial and temporal resolutions, accuracy levels, invasiveness, costs, and adaptability across diagnostic methods. In this review, we employ a unified six-criterion framework to assess the leading techniques: classical calorimetric and thermal probes, thin-film interferometry, infrared thermography, pulse-echo ultrasound, acoustic-emission monitoring, chromatic-confocal sensing, total-internal-reflection imaging, particle-based velocimetry, laser-induced fluorescence, X-ray tomography, and high-speed particle tracking, alongside emerging fiber-optic, capacitive, and piezoelectric sensors. We also introduce two decision-support tools: a multi-axis radar chart that visualizes each method's performance envelope and a decision-tree flowchart that matches experimental needs to optimal approaches. Our analysis highlights four critical gaps: non-invasive nanometer-scale mapping over extensive areas; real-time capture of microsecond-scale transients; simultaneous measurement of thickness, temperature, and heat flux; and reliable operation under harsh conditions. Finally, we explore the integration of machine-learning-driven data fusion with CFD, survey innovative modalities such as terahertz time-domain spectroscopy, tabletop X-ray phase-contrast imaging, and digital holographic interferometry, and outline a roadmap for next-generation, high-fidelity condensation modelling in both terrestrial and microgravity environments.
Tianyu Gao et al 2026 Meas. Sci. Technol.
In the domain of autonomous driving, 3D point cloud maps generated by light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) are crucial for navigation, as they provide essential spatial constraints by leveraging static environmental features. However, residual traces of dynamic objects within these maps compromise their reliability. Existing methods for dynamic object removal often misclassify static points while eliminating dynamic ones, failing to achieve an optimal balance between preservation and deletion. To address this challenge, we propose Pandora, a novel approach that combines polar grids with adaptive binary height encoding to accurately distinguish dynamic objects while preserving high-quality static structures across diverse environments. The method employs an adaptive hierarchical strategy for point cloud localization and a bitwise matrix comparison mechanism to detect dynamic regions. Additionally, it identifies near-ground dynamic points through density analysis and incorporates a dual ray-casting module to mitigate misclassification of static points caused by occlusions. By leveraging parallel processing during per-frame comparisons, the computational overhead associated with coordinate transformations is significantly reduced. Experimental evaluation on the KITTI and Semi-indoor datasets demonstrates that Pandora achieves superior overall performance, with dynamic and static point identification accuracy reaching 98.6%, while maintaining high processing efficiency.
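The bitwise mechanism named above can be illustrated on a single cell; the bin layout, bit semantics, and values here are hypothetical, not taken from the paper.

```python
# Hypothetical single-cell illustration of binary height encoding with
# bitwise comparison: each (range, azimuth) cell stores its occupied
# height bins as bits of one integer, so comparing a map cell against a
# scan cell costs only a couple of machine instructions.
map_cell  = 0b0011_1100   # height bins occupied in the aggregated map
scan_cell = 0b0011_0000   # height bins occupied in the current scan

# Bins present in the map but free in the current scan are candidate
# dynamic traces left behind by moving objects.
dynamic_bins = map_cell & ~scan_cell
print(bin(dynamic_bins))  # 0b1100
```

Encoding a whole polar grid this way turns per-frame dynamic-region detection into elementwise integer operations over a matrix, which parallelizes trivially.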
Shizhan Xu et al 2026 Meas. Sci. Technol.
With the continuous growth of traffic volume, many assembled small box-girder bridges have exceeded their design capacities, resulting in structural cracks and deformations. Accurately evaluating damage degrees and residual bearing capacities under operational conditions remains a significant challenge. This paper proposes a novel structural evaluation method based on the Nominal Neutral Axis (NNA). Unlike traditional indicators, the NNA represents the intrinsic nature of the cross-section and is not affected by load excitation. We objectively determine the NNA using a statistical method applied to strain data collected under stochastic traffic flow. Based on the NNA, a unique nonlinear evaluation index is established to quantify the deviation from the plane section assumption. The reliability of the index was strictly verified through finite element modeling and a targeted short-term strain monitoring test on a real bridge under both controlled static loads and actual stochastic traffic. The results demonstrate that the NNA can be accurately identified under stochastic traffic conditions. The proposed index effectively characterizes the structural state, quantifies damage extent, and assesses the remaining load-carrying capacity. Specifically, the value intervals allow for the clear distinction between the structure's elastic stage, cracked working stage, and plastic stage. This method provides a practical, reliable, and high-efficiency solution for the rapid assessment of bridge conditions without requiring traffic closure. It offers a valuable tool for transportation departments to prioritize maintenance and ensure the operational safety of aging bridge infrastructure.
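Under the plane-section assumption that the above index is built on, the neutral-axis height follows directly from two strain readings; this minimal sketch is illustrative and is not the paper's statistical identification procedure.

```python
# Illustrative sketch (not the paper's statistical procedure): with a
# linear strain profile over the section depth h, the neutral-axis height
# is the zero crossing between the bottom and top strain readings.

def neutral_axis(eps_bot, eps_top, h):
    """Height above the bottom fiber where strain = 0 (plane sections)."""
    return h * eps_bot / (eps_bot - eps_top)

# Hypothetical readings: tension at the bottom, compression at the top.
print(neutral_axis(eps_bot=120e-6, eps_top=-80e-6, h=1.0))  # 0.6
```

Averaging such estimates over many stochastic traffic events is what makes the identified axis independent of any particular load, since the zero-crossing position cancels the load amplitude.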
Jiadong Zhang and Wei Wang 2026 Meas. Sci. Technol. 37 046307
The main challenges in object-level simultaneous localization and mapping (SLAM) lie in the representation of object geometric properties (position, rotation, scale) and surface shapes, as well as how to efficiently apply such object representation to accurate online camera tracking. Previous methods have under-exploited scene structural priors, limiting the object reconstruction quality and accuracy of localization. In this study, we propose QDSP-SLAM as a novel object-level SLAM system designed for static indoor scenes, which reconstructs objects using both dual quadrics and deep implicit shapes. We introduce the Manhattan world assumption at the object level to establish geometric constraints between dual quadrics and Manhattan planes, thereby significantly improving the accuracy of object geometric properties. Furthermore, we propose a deep implicit shape representation method integrated with dual quadrics to reconstruct object surface shapes, which leverages both category-level shape priors and scene structural priors. We redesigned the bundle-adjustment process specifically for a new object model that integrates dual quadrics and implicit shapes. We validated our approach in both simulated and real indoor scenes, demonstrating significant improvements in both modeling quality and localization accuracy compared with previous methods. The code for this study is available at https://github.com/TINY-KE/QDSP-SLAM.git.
Yusuke Sakiyama et al 2026 Meas. Sci. Technol. 37 047001
Pseudo-heterodyne scattering-type scanning near-field optical microscopy (sSNOM) is applied in the mid-infrared region to detect the chemical composition of biomolecules on the nanoscale. However, the application of sSNOM in molecular biology has been limited to static images in air. Recently, bottom-illumination sSNOM (BI-sSNOM) was developed for operation in water. Yet, the scan rate of sSNOM remains a bottleneck for recording protein structural changes in aqueous solution on the seconds time scale. We designed an optical and mechanical system consisting of a separate-scan high-speed atomic force microscope (HS-AFM) coupled to the BI-sSNOM optics. The designed AFM scanner has a mechanical bandwidth of ca 70 kHz along the Z-axis and ca 6 kHz along the XY-axes, equivalent to sample-scanning HS-AFM. The AFM performance is demonstrated by imaging actin filaments, and the optical design is validated by sSNOM experiments on purple membranes and microtubules.
Bing Pan et al 2009 Meas. Sci. Technol. 20 062001
As a practical and effective tool for quantitative in-plane deformation measurement of a planar object surface, two-dimensional digital image correlation (2D DIC) is now widely accepted and commonly used in the field of experimental mechanics. It directly provides full-field displacements to sub-pixel accuracy and full-field strains by comparing the digital images of a test object surface acquired before and after deformation. In this review, methodologies of the 2D DIC technique for displacement field measurement and strain field estimation are systematically reviewed and discussed. Detailed analyses of the measurement accuracy considering the influences of both experimental conditions and algorithm details are provided. Measures for achieving high accuracy deformation measurement using the 2D DIC technique are also recommended. Since microscale and nanoscale deformation measurement can easily be realized by combining the 2D DIC technique with high-spatial-resolution microscopes, the 2D DIC technique should find more applications in broad areas.
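The subset-matching core of 2D DIC can be sketched with zero-normalized cross-correlation (ZNCC); real DIC adds sub-pixel interpolation and subset shape functions, which this integer-pixel toy deliberately omits.

```python
import numpy as np

# Toy sketch of the 2D DIC core: locate a reference subset in the
# deformed image by maximizing zero-normalized cross-correlation (ZNCC).
# Real DIC refines this to sub-pixel accuracy with interpolation and
# subset shape functions; here only an integer-pixel search is shown.

def zncc(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

rng = np.random.default_rng(2)
ref = rng.random((64, 64))                       # speckle-like reference image
dx, dy = 3, 5                                    # imposed rigid translation
deformed = np.roll(np.roll(ref, dx, axis=0), dy, axis=1)

sub = ref[20:35, 20:35]                          # 15 x 15 reference subset
best = max(((u, v) for u in range(-8, 9) for v in range(-8, 9)),
           key=lambda s: zncc(sub, deformed[20 + s[0]:35 + s[0],
                                            20 + s[1]:35 + s[1]]))
print(best)  # (3, 5): the imposed displacement is recovered
```

Repeating the search for a grid of subsets yields the full-field displacement map from which strains are then differentiated.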
Bing Pan 2018 Meas. Sci. Technol. 29 082001
This article is a personal review of the historical developments of digital image correlation (DIC) techniques, together with recent important advances and future goals. The historical developments of DIC techniques over the past 35 years are divided into a foundation-laying phase (1982–1999) and a boom phase (2000 to the present), and are traced by describing some of the milestones that have enabled new and/or better DIC measurements to be made. Important advances made to DIC since 2010 are reviewed, with an emphasis on new insights into the 2D-DIC system, new improvements to the correlation algorithm, and new developments in stereo-DIC systems. A summary of the current state-of-the-art DIC techniques is provided. Some further improvements that are needed and the future goals in the field are also envisioned.
Dongdong Liu et al 2024 Meas. Sci. Technol. 35 012002
Planetary gearboxes have various merits in mechanical transmission, but their complex structure and intricate operation modes bring large challenges in terms of fault diagnosis. Deep learning has attracted increasing attention in intelligent fault diagnosis and has been successfully adopted for planetary gearbox fault diagnosis, avoiding the difficulty in manually analyzing complex fault features with signal processing methods. This paper presents a comprehensive review of deep learning-based planetary gearbox health state recognition. First, the challenges caused by the complex vibration characteristics of planetary gearboxes in fault diagnosis are analyzed. Second, according to the popularity of deep learning in planetary gearbox fault diagnosis, we briefly introduce six mainstream algorithms, i.e. autoencoder, deep Boltzmann machine, convolutional neural network, transformer, generative adversarial network, and graph neural network, and some variants of them. Then, the applications of these methods to planetary gearbox fault diagnosis are reviewed. Finally, the research prospects and challenges in this research are discussed. According to the challenges, a dataset is introduced in this paper to facilitate future investigations. We expect that this paper can provide new graduate students, institutions and companies with a preliminary understanding of methods used in this field. The dataset can be downloaded from https://github.com/Liudd-BJUT/WT-planetary-gearbox-dataset.
Jane Hodgkinson and Ralph P Tatam 2013 Meas. Sci. Technol. 24 012004
The detection and measurement of gas concentrations using the characteristic optical absorption of the gas species is important for both understanding and monitoring a variety of phenomena from industrial processes to environmental change. This study reviews the field, covering several individual gas detection techniques including non-dispersive infrared, spectrophotometry, tunable diode laser spectroscopy and photoacoustic spectroscopy. We present the basis for each technique, recent developments in methods and performance limitations. The technology available to support this field, in terms of key components such as light sources and gas cells, has advanced rapidly in recent years and we discuss these new developments. Finally, we present a performance comparison of different techniques, taking data reported over the preceding decade, and draw conclusions from this benchmarking.
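All the absorption-based techniques reviewed above rest on the Beer-Lambert law; a minimal sketch of retrieving concentration from the transmitted intensity is shown below, with a hypothetical absorption coefficient and path length.

```python
import numpy as np

# Minimal sketch (textbook Beer-Lambert relation, not a specific
# instrument from the review): concentration retrieval from the measured
# intensity I transmitted through a gas cell of path length L,
#   I = I0 * exp(-alpha * C * L),
# where alpha is the absorption coefficient per unit concentration.

def concentration(I, I0, alpha, L):
    """Invert Beer-Lambert: C = -ln(I/I0) / (alpha * L)."""
    return -np.log(I / I0) / (alpha * L)

alpha = 1.2e-3   # hypothetical absorption coefficient, (ppm*m)^-1
L = 0.5          # path length, m
C_true = 400.0   # ppm
I0 = 1.0
I = I0 * np.exp(-alpha * C_true * L)
print(concentration(I, I0, alpha, L))  # recovers 400.0 ppm
```

The techniques compared in the review differ mainly in how I and I0 are measured (broadband filtering, tunable lasers, acoustic detection), not in this underlying relation.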
Hongfeng Tao et al 2024 Meas. Sci. Technol. 35 105023
To guarantee the stability and safety of industrial production, it is necessary to regulate the behavior of employees. However, high background complexity, low pixel counts, occlusion and fuzzy appearance can result in a high miss rate and poor detection accuracy for small objects. Considering the above problems, this paper proposes the Enhanced Feature Extraction-You Only Look Once (EFE-YOLO) algorithm to improve the detection of industrial small objects. To enhance the detection of fuzzy and occluded objects, the PixelShuffle and Receptive-Field Attention (PSRFA) upsampling module is designed to preserve and reconstruct more detailed information and extract the receptive-field attention weights. Furthermore, the multi-scale and efficient (MSE) downsampling module is designed to merge global and local semantic features to alleviate the problem of false and missed detections. Subsequently, the Adaptive Feature Adjustment and Fusion (AFAF) module is designed to highlight the important features and suppress background information that is not beneficial for detection. Finally, the EIoU loss function is used to improve the convergence speed and localization accuracy. All experiments are conducted on a homemade dataset. The improved YOLOv5 algorithm proposed in this paper improves mAP@0.5 (mean average precision at a threshold of 0.50) by 2.8% compared to the baseline YOLOv5 algorithm. The average precision and recall for small objects improve by 8.1% and 7.5%, respectively. The detection performance remains leading in comparison with other advanced algorithms.
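The EIoU loss mentioned above builds on the intersection-over-union of predicted and ground-truth boxes; a minimal sketch of plain IoU is given here, with the additional EIoU penalty terms omitted.

```python
# Minimal sketch of box IoU, the quantity underlying the EIoU loss named
# above (the extra center-distance and aspect penalty terms of EIoU are
# not reproduced). Boxes are (x1, y1, x2, y2) with x1 < x2, y1 < y2.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# Two partially overlapping unit-area-4 boxes share 1 unit of overlap.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ~ 0.143
```

A loss of the form 1 - IoU (plus penalties) then drives predicted boxes toward their ground-truth counterparts during training.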
Marco Grasso and Bianca Maria Colosimo 2017 Meas. Sci. Technol. 28 044005
Despite continuous technological enhancements of metal Additive Manufacturing (AM) systems, the lack of process repeatability and stability still represents a barrier to industrial breakthrough. The most relevant metal AM applications currently involve industrial sectors (e.g. aerospace and bio-medical) where defect avoidance is fundamental. Because of this, there is a need to develop novel in situ monitoring tools able to keep the stability of the process under control on a layer-by-layer basis, and to detect the onset of defects as soon as possible. On the one hand, AM systems must be equipped with in situ sensing devices able to measure relevant quantities during the process, a.k.a. process signatures. On the other hand, in-process data analytics and statistical monitoring techniques are required to detect and localize the defects in an automated way. This paper reviews the literature and the commercial tools for in situ monitoring of powder bed fusion (PBF) processes. It explores the different categories of defects and their main causes, the most relevant process signatures and the in situ sensing approaches proposed so far. Particular attention is devoted to the development of automated defect detection rules and the study of process control strategies, which represent two critical fields for the development of future smart PBF systems.
Bernhard Wieneke 2015 Meas. Sci. Technol. 26 074002
The uncertainty of a PIV displacement field is estimated using a generic post-processing method based on a statistical analysis of the correlation process, exploiting differences in the intensity pattern of the two images. First, the second image is dewarped back onto the first one using the computed displacement field, which provides two almost perfectly matching images. The differences between them are analyzed with regard to their effect on shifting the peak of the correlation function. A relationship is derived between the standard deviation of intensity differences in each interrogation window and the expected asymmetry of the correlation peak, which is then converted into the uncertainty of a displacement vector. This procedure is tested with synthetic data for various types of noise and experimental conditions (pixel noise, out-of-plane motion, seeding density, particle image size, etc) and is shown to provide an accurate estimate of the true error.
Andrea Sciacchitano and Bernhard Wieneke 2016 Meas. Sci. Technol. 27 084006
This paper discusses the propagation of the instantaneous uncertainty of PIV measurements to statistical and instantaneous quantities of interest derived from the velocity field. The expression of the uncertainty of vorticity, velocity divergence, mean value and Reynolds stresses is derived. It is shown that the uncertainty of vorticity and velocity divergence requires the knowledge of the spatial correlation between the error of the x and y particle image displacement, which depends upon the measurement spatial resolution. The uncertainty of statistical quantities is often dominated by the random uncertainty due to the finite sample size and decreases with the square root of the effective number of independent samples. Monte Carlo simulations are conducted to assess the accuracy of the uncertainty propagation formulae. Furthermore, three experimental assessments are carried out. In the first experiment, a turntable is used to simulate a rigid rotation flow field. The estimated uncertainty of the vorticity is compared with the actual vorticity error root-mean-square, with differences between the two quantities within 5–10% for different interrogation window sizes and overlap factors. A turbulent jet flow is investigated in the second experimental assessment. The reference velocity, which is used to compute the reference value of the instantaneous flow properties of interest, is obtained with an auxiliary PIV system, which features a higher dynamic range than the measurement system. Finally, the uncertainty quantification of statistical quantities is assessed via PIV measurements in a cavity flow. The comparison between estimated uncertainty and actual error demonstrates the accuracy of the proposed uncertainty propagation methodology.
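The scaling of statistical uncertainty with the effective number of independent samples can be sketched as follows; the AR(1)-based estimate of the effective sample count is a common textbook choice, not necessarily the paper's exact formula.

```python
import numpy as np

# Sketch of the statistical-uncertainty idea from the abstract: the random
# uncertainty of a time-averaged PIV quantity scales with 1/sqrt(N_eff),
# where N_eff accounts for sample-to-sample correlation. The AR(1)-based
# N_eff below is a standard textbook estimate, assumed for illustration.

def mean_uncertainty(u):
    u = np.asarray(u, dtype=float)
    n = len(u)
    rho = np.corrcoef(u[:-1], u[1:])[0, 1]        # lag-1 autocorrelation
    n_eff = n * (1.0 - rho) / (1.0 + rho)         # effective sample count
    return np.std(u, ddof=1) / np.sqrt(max(n_eff, 1.0))

rng = np.random.default_rng(1)
# Correlated velocity samples: AR(1) process with coefficient 0.6.
u = np.empty(5000)
u[0] = 0.0
for i in range(1, len(u)):
    u[i] = 0.6 * u[i - 1] + rng.standard_normal()
print(mean_uncertainty(u))   # larger than the naive std/sqrt(n)
```

For the correlated series above, N_eff is roughly a quarter of N, so the uncertainty of the mean is about twice the value a naive σ/√N estimate would suggest.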
Gary S Settles and Michael J Hargather 2017 Meas. Sci. Technol. 28 042001
Schlieren and shadowgraph techniques are used around the world for imaging and measuring phenomena in transparent media. These optical methods originated long ago in parallel with telescopes and microscopes, and although it might seem that little new could be expected of them on the timescale of 15 years, several important developments have in fact happened and are reviewed here. The digital revolution has had a transformative effect: it has replaced clumsy photographic film methods with excellent (though expensive) high-speed video cameras, made digital correlation and processing of shadow and schlieren images routine, and provided an entirely new synthetic schlieren technique that has attracted a lot of attention: background-oriented schlieren, or BOS. Several aspects of modern schlieren and shadowgraphy depend upon laptop-scale computer processing of images using an image-capable language such as MATLAB™. BOS, shock-wave tracking, schlieren velocimetry, synthetic streak-schlieren, and straightforward quantitative density measurements in 2D flows are all recent developments empowered by this digital and computational capability.
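The core of BOS is a correlation between a reference image of a background pattern and an image of the same pattern seen through the flow: the apparent pixel shift is proportional to the line-of-sight-integrated refractive-index gradient. A minimal sketch of that correlation step (in Python rather than the MATLAB mentioned above, with integer-pixel accuracy and no sub-pixel peak fit) might look like:

```python
import numpy as np

def bos_shift(ref, meas):
    """Estimate the apparent background shift of one BOS interrogation
    window via FFT cross-correlation. Returns (dy, dx) in pixels,
    integer accuracy only; real BOS codes add a sub-pixel peak fit.
    The shift is proportional to the integrated refractive-index
    gradient along the line of sight."""
    # circular cross-correlation of the distorted and reference windows
    corr = np.fft.ifft2(np.fft.fft2(meas) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak locations past the half-width back to negative shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Applied window by window over the image pair, this yields a displacement field that can then be integrated to a quantitative density field in 2D flows.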
Laurent Graftieaux et al 2001 Meas. Sci. Technol. 12 1422
Particle image velocimetry (PIV) measurements are made in a highly turbulent swirling flow. In this flow, we observe a coexistence of turbulent fluctuations and an unsteady swirling motion. The proper orthogonal decomposition (POD) is used to separate these two contributions to the total energy. POD is combined with two new vortex identification functions, Γ1 and Γ2. These functions identify the locations of the centre and boundary of the vortex on the basis of the velocity field. The POD computed for the measured velocity fields shows that two spatial modes are responsible for most of the fluctuations observed in the vicinity of the location of the mean vortex centre. These two modes are also responsible for the large-scale coherence of the fluctuations. The POD computed from the Γ2 scalar field shows that the displacement and deformation of the large-scale vortex are correlated to these modes. We suggest the use of such a method to separate pseudo-fluctuations due to the unsteady nature of the large-scale vortices from fluctuations due to small-scale turbulence.
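The Γ1 function mentioned above has a compact definition: at a point P it averages, over neighbouring points M, the sine of the angle between the radius vector PM and the velocity at M, so |Γ1| approaches 1 at a vortex centre. A small sketch under simplifying assumptions (the whole grid is used as the neighbourhood of P, rather than the local window of the original method; the function name is illustrative):

```python
import numpy as np

def gamma1(x, y, u, v, xp, yp):
    """Gamma_1 vortex-centre indicator: mean over points M of
    sin(theta), theta being the angle between PM and the velocity
    at M, i.e. (PM x U).z / (|PM| |U|). Points with zero radius or
    zero velocity are excluded from the average."""
    dx, dy = x - xp, y - yp
    cross = dx * v - dy * u            # z-component of PM x U
    norm = np.hypot(dx, dy) * np.hypot(u, v)
    mask = norm > 0
    return np.mean(cross[mask] / norm[mask])
```

For an ideal solid-body rotation the velocity is everywhere perpendicular to the radius vector, so Γ1 evaluated at the centre equals 1; turbulent small-scale fluctuations pull it below that bound, which is what makes the function useful for separating the large-scale vortex from turbulence.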
Journal information
- Measurement Science and Technology (1990-present)
- doi: 10.1088/issn.0957-0233
- Online ISSN: 1361-6501
- Print ISSN: 0957-0233
Journal history
- 1990-present: Measurement Science and Technology
- 1968-1989: Journal of Physics E: Scientific Instruments
- 1923-1967: Journal of Scientific Instruments