High-resolution reconstruction of flow-field data from low-resolution and noisy measurements is of interest due to the prevalence of such problems in experimental fluid mechanics, where the measurement data are in general sparse, incomplete and noisy. Deep-learning approaches have been shown to be suitable for such super-resolution tasks. However, they usually require a large number of high-resolution examples, which may not be available in many cases. Moreover, the obtained predictions may fail to comply with physical principles, e.g. mass and momentum conservation. Physics-informed deep learning provides frameworks for integrating data and physical laws during learning. In this study, we apply physics-informed neural networks (PINNs) for the super-resolution of flow-field data in both time and space from a limited set of noisy measurements, without any high-resolution reference data. Our objective is to obtain a continuous solution of the problem, providing a physically consistent prediction at any point in the solution domain. We demonstrate the applicability of PINNs for the super-resolution of flow-field data in time and space through three canonical cases: Burgers' equation, two-dimensional vortex shedding behind a circular cylinder and the minimal turbulent channel flow. The robustness of the models is also investigated by adding synthetic Gaussian noise. Furthermore, we show the capability of PINNs to improve the resolution and reduce the noise in a real experimental dataset consisting of hot-wire-anemometry measurements. Our results show the adequate capabilities of PINNs in the context of data augmentation for experiments in fluid mechanics.
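Although the abstract gives no implementation details, the PINN objective it describes can be sketched as a composite loss. The snippet below is a minimal illustrative sketch, not the authors' code: the viscous Burgers residual u_t + u·u_x − ν·u_xx is approximated by finite differences on a candidate field (standing in for automatic differentiation of a network surrogate) and combined with a data-fidelity term on sparse noisy measurements.

```python
import numpy as np

def burgers_residual(u, dx, dt, nu):
    """Residual of u_t + u*u_x - nu*u_xx on the interior points of u[t, x],
    using central finite differences (a stand-in for autodiff of a network)."""
    u_t = (u[2:, 1:-1] - u[:-2, 1:-1]) / (2 * dt)
    u_x = (u[1:-1, 2:] - u[1:-1, :-2]) / (2 * dx)
    u_xx = (u[1:-1, 2:] - 2 * u[1:-1, 1:-1] + u[1:-1, :-2]) / dx**2
    return u_t + u[1:-1, 1:-1] * u_x - nu * u_xx

def pinn_style_loss(u_pred, u_meas, mask, dx, dt, nu, lam=1.0):
    """Composite PINN-style objective: misfit on sparse noisy measurements
    (selected by a boolean mask) plus weighted PDE-residual penalty."""
    data_loss = np.mean((u_pred[mask] - u_meas[mask]) ** 2)
    phys_loss = np.mean(burgers_residual(u_pred, dx, dt, nu) ** 2)
    return data_loss + lam * phys_loss
```

Minimizing this kind of loss over the network parameters is what yields predictions that honour both the measurements and the conservation laws.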
ISSN: 1361-6501
Launched in 1923, Measurement Science and Technology was the world's first scientific instrumentation and measurement journal and the first research journal produced by the Institute of Physics. It covers all aspects of the theory, practice and application of measurement, instrumentation and sensing across science and engineering.
Hamidreza Eivazi et al 2024 Meas. Sci. Technol. 35 075303
Simon Laflamme et al 2023 Meas. Sci. Technol. 34 093001
Structural health monitoring (SHM) is the automation of the condition assessment process of an engineered system. When applied to geometrically large components or structures, such as those found in civil and aerospace infrastructure and systems, a critical challenge is in designing the sensing solution that could yield actionable information. This is a difficult task to conduct cost-effectively, because of the large surfaces under consideration and the localized nature of typical defects and damages. There have been significant research efforts in empowering conventional measurement technologies for applications to SHM in order to improve performance of the condition assessment process. Yet, the field implementation of these SHM solutions is still in its infancy, attributable to various economic and technical challenges. The objective of this Roadmap publication is to discuss modern measurement technologies that were developed for SHM purposes, along with their associated challenges and opportunities, and to provide a path to research and development efforts that could yield impactful field applications. The Roadmap is organized into four sections: distributed embedded sensing systems, distributed surface sensing systems, multifunctional materials, and remote sensing. Recognizing that many measurement technologies may overlap between sections, we define distributed sensing solutions as those that involve or imply the utilization of numbers of sensors geometrically organized within (embedded) or over (surface) the monitored component or system. Multifunctional materials are sensing solutions that combine multiple capabilities, for example those also serving structural functions. Remote sensing comprises contactless solutions, for example cell phones, drones, and satellites; it also includes the notion of remotely controlled robots.
Liisa M Hirvonen and Klaus Suhling 2017 Meas. Sci. Technol. 28 012003
Time-correlated single photon counting (TCSPC) is a widely used, robust and mature technique to measure the photon arrival time in applications such as fluorescence spectroscopy and microscopy, LIDAR and optical tomography. In the past few years there have been significant developments with wide-field TCSPC detectors, which can record the position as well as the arrival time of the photon simultaneously. In this review, we summarise different approaches used in wide-field TCSPC detection, and discuss their merits for different applications, with emphasis on fluorescence lifetime imaging.
Adam Thompson et al 2021 Meas. Sci. Technol. 32 105013
Maximum permissible errors (MPEs) are an important measurement system specification and form the basis of periodic verification of a measurement system's performance. However, there is no standard methodology for determining MPEs, so when they are not provided, or not suitable for the measurement procedure performed, it is unclear how to generate an appropriate value with which to verify the system. Whilst a simple approach might be to take many measurements of a calibrated artefact and then use the maximum observed error as the MPE, this method requires a large number of repeat measurements for high confidence in the calculated MPE. Here, we present a statistical method of MPE determination, capable of providing MPEs with high confidence and minimum data collection. The method is presented with 1000 synthetic experiments and is shown to determine an overestimated MPE within 10% of an analytically true value in 99.2% of experiments, while underestimating the MPE with respect to the analytically true value in 0.8% of experiments (overestimating the value, on average, by 1.24%). The method is then applied to a real test case (probing form error for a commercial fringe projection system), where the efficiently determined MPE is overestimated by 0.3% with respect to an MPE determined using an arbitrarily chosen large number of measurements.
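The paper's exact statistical procedure is not spelled out in the abstract. As a hedged sketch only, one common statistical route to an MPE with stated confidence is a one-sided Gaussian tolerance bound on the observed errors; the function name and the Gaussian assumption below are ours, not the paper's.

```python
import numpy as np
from scipy import stats

def mpe_tolerance_bound(errors, coverage=0.999, confidence=0.95):
    """Upper bound expected to cover a fraction `coverage` of the error
    distribution with probability `confidence`, assuming roughly Gaussian
    errors (one-sided normal tolerance factor via the noncentral t)."""
    errors = np.asarray(errors, dtype=float)
    n = errors.size
    mean, sd = errors.mean(), errors.std(ddof=1)
    z = stats.norm.ppf(coverage)
    k = stats.nct.ppf(confidence, df=n - 1, nc=z * np.sqrt(n)) / np.sqrt(n)
    return abs(mean) + k * sd
```

Unlike "take the maximum of many repeats", the tolerance-bound construction quantifies how conservative the resulting MPE is for a given sample size.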
Martin Kögler and Bryan Heilala 2020 Meas. Sci. Technol. 32 012002
Time-gated (TG) Raman spectroscopy (RS) has been shown to be an effective technical solution for the major problem whereby sample-induced fluorescence masks the Raman signal during spectral detection. Technical methods of fluorescence rejection have come a long way since the early implementations of large and expensive laboratory equipment, such as the optical Kerr gate. Today, more affordable small sized options are available. These improvements are largely due to advances in the production of spectroscopic and electronic components, leading to the reduction of device complexity and costs. An integral part of TG Raman spectroscopy is the temporally precise synchronization (picosecond range) between the pulsed laser excitation source and the sensitive and fast detector. The detector is able to collect the Raman signal during the short laser pulses, while fluorescence emission, which has a longer delay, is rejected during the detector dead-time. TG Raman is also resistant against ambient light as well as thermal emissions, due to its short measurement duty cycle.
In recent years, the focus in the study of ultra-sensitive and fast detectors has been on gated and intensified charge coupled devices (ICCDs), or on CMOS single-photon avalanche diode (SPAD) arrays, which are also suitable for performing TG RS. SPAD arrays have the advantage of being even more sensitive, with better temporal resolution compared to gated CCDs, and without the requirement for excessive detector cooling. This review aims to provide an overview of TG Raman from early to recent developments, its applications and extensions.
A Sciacchitano 2019 Meas. Sci. Technol. 30 092001
Particle image velocimetry (PIV) has become the chief experimental technique for velocity field measurements in fluid flows. The technique yields quantitative visualizations of the instantaneous flow patterns, which are typically used to support the development of phenomenological models for complex flows or for validation of numerical simulations. However, due to the complex relationship between measurement errors and experimental parameters, the quantification of the PIV uncertainty is far from being a trivial task and has often relied upon subjective considerations. Recognizing the importance of methodologies for the objective and reliable uncertainty quantification (UQ) of experimental data, several PIV-UQ approaches have been proposed in recent years that aim at the determination of objective uncertainty bounds in PIV measurements.
This topical review on PIV uncertainty quantification aims to provide the reader with an overview of error sources in PIV measurements and to inform them of the most up-to-date approaches for PIV uncertainty quantification and propagation. The paper first introduces the general definitions and classifications of measurement errors and uncertainties, following the guidelines of the International Organization for Standards (ISO) and of renowned books on the topic. Details on the main PIV error sources are given, considering the entire measurement chain from timing and synchronization of the data acquisition system, to illumination, mechanical properties of the tracer particles, imaging of those, analysis of the particle motion, data validation and reduction. The focus is on planar PIV experiments for the measurement of two- or three-component velocity fields.
Approaches for the quantification of the uncertainty of PIV data are discussed. Those are divided into a-priori UQ approaches, which provide a general figure for the uncertainty of PIV measurements, and a-posteriori UQ approaches, which are data-based and aim at quantifying the uncertainty of specific sets of data. The findings of a-priori PIV-UQ based on theoretical modelling of the measurement chain as well as on numerical or experimental assessments are discussed. The most up-to-date approaches for a-posteriori PIV-UQ are introduced, highlighting their capabilities and limitations.
As many PIV experiments aim at determining flow properties derived from the velocity fields (e.g. vorticity, time-average velocity, Reynolds stresses, pressure), the topic of PIV uncertainty propagation is tackled considering the recent investigations based on Taylor series and Monte Carlo methods. Finally, the uncertainty quantification of 3D velocity measurements by volumetric approaches (tomographic PIV and Lagrangian particle tracking) is discussed.
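As a concrete illustration of the Monte Carlo propagation idea mentioned above (a generic sketch, not any specific method from the review): perturb the measured velocity field with its estimated uncertainty and observe the spread of a derived quantity, here the out-of-plane vorticity from central differences.

```python
import numpy as np

def vorticity(u, v, dx):
    """Out-of-plane vorticity dv/dx - du/dy on a uniform grid."""
    dv_dx = np.gradient(v, dx, axis=1)
    du_dy = np.gradient(u, dx, axis=0)
    return dv_dx - du_dy

def mc_vorticity_uncertainty(u, v, sigma_u, dx, n_samples=500, seed=0):
    """Pointwise 1-sigma uncertainty of vorticity, obtained by resampling
    the velocity field with independent Gaussian measurement noise."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_samples):
        u_s = u + rng.normal(0.0, sigma_u, u.shape)
        v_s = v + rng.normal(0.0, sigma_u, v.shape)
        samples.append(vorticity(u_s, v_s, dx))
    return np.std(samples, axis=0)
```

The same resampling loop applies to any derived quantity (Reynolds stresses, pressure), which is why Monte Carlo is attractive when the Taylor-series linearization becomes cumbersome.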
Guanglin Chen et al 2024 Meas. Sci. Technol. 35 086202
Multi-rotor unmanned aerial vehicles (UAVs) are extensively utilized across various domains, and the motor is a pivotal element of the UAV power system. The majority of UAV failures and crashes stem from motor malfunctions, underscoring the need for comprehensive research on fault diagnosis of UAV motors to ensure the stable and reliable execution of flight tasks. This study focuses on quadrotor UAVs and devises targeted fault-simulation experiments based on the structural features and operational characteristics of the brushless DC motors used in quadrotor UAVs, specifically examining the stator, rotor and bearings. To address challenges related to the UAV's own load constraints, the limited space for redundant parts, and the high cost and difficulty of installing sensors for traditional fault-diagnostic signals such as vibration and temperature, this study uses current signals instead. This approach resolves the issue of challenging data collection on UAVs, and a current-signal-based fault-diagnosis method for UAV motors is investigated. Because the health status of UAV components is tightly coupled to flight stability, only limited training samples of fault data are available; with so few samples, traditional machine-learning and deep-learning methods struggle to identify representative features, leading to a risk of overfitting and reduced diagnostic accuracy. To overcome this challenge, we propose a hybrid neural-network fault-diagnosis model that combines a width learning system with a convolutional neural network (CNN). The width learning system removes temporal characteristics from the original current signal, capturing more comprehensive and representative sample features in the width feature space. The CNN is then employed for feature extraction and classification.
In empirical small-sample fault-diagnosis experiments using current-signal data from UAV motors, the proposed model outperforms the comparison models.
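The abstract does not define the width learning system precisely. The following sketch shows the generic broad/width-learning construction it presumably builds on — random feature and enhancement nodes with a ridge-regularized linear readout; all names and sizes are illustrative assumptions, and the CNN stage is not reproduced.

```python
import numpy as np

def width_features(X, n_feature=40, n_enhance=60, seed=0):
    """Map inputs through random feature nodes and enhancement nodes,
    the generic broad/width-learning expansion (illustrative sizes)."""
    rng = np.random.default_rng(seed)
    Wf = rng.normal(size=(X.shape[1], n_feature))
    Z = np.tanh(X @ Wf)                 # feature nodes
    We = rng.normal(size=(n_feature, n_enhance))
    H = np.tanh(Z @ We)                 # enhancement nodes
    return np.hstack([Z, H])

def fit_readout(A, Y, lam=1e-2):
    """Ridge-regularized least-squares readout: (A^T A + lam I) W = A^T Y."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Y)
```

The appeal for small-sample fault diagnosis is that only the linear readout is fitted, which keeps the effective number of trained parameters small.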
Louise Wright and Stuart Davidson 2024 Meas. Sci. Technol. 35 051001
Digital twinning is a rapidly growing area of research. Digital twins combine models and data to provide up-to-date information about the state of a system. They support reliable decision-making in fields such as structural monitoring and advanced manufacturing. The use of metrology data to update models in this way offers benefits in many areas, including metrology itself. The recent activities in digitalisation of metrology offer a great opportunity to make metrology data 'twin-friendly' and to incorporate digital twins into metrological processes. This paper discusses key features of digital twins that will inform their use in metrology and measurement, highlights the links between digital twins and virtual metrology, outlines what use metrology can make of digital twins and how metrology and measured data can support the use of digital twins, and suggests potential future developments that will maximise the benefits achieved.
Gustavo Quino et al 2021 Meas. Sci. Technol. 32 015203
Digital image correlation (DIC) is a widely used technique in experimental mechanics for full field measurement of displacements and strains. The subset matching based DIC requires surfaces containing a random pattern. Even though there are several techniques to create random speckle patterns, their applicability is still limited. For instance, traditional methods such as airbrush painting are not suitable in the following challenging scenarios: (i) when time available to produce the speckle pattern is limited and (ii) when dynamic loading conditions trigger peeling of the pattern. The development and application of some novel techniques to address these situations is presented in this paper. The developed techniques make use of commercially available materials such as temporary tattoo paper, adhesives and stamp kits. The presented techniques are shown to be quick, repeatable, consistent and stable even under impact loads and large deformations. Additionally, they offer the possibility to optimise and customise the speckle pattern. The speckling techniques presented in the paper are also versatile and can be quickly applied in a variety of materials.
Thomas Engel 2023 Meas. Sci. Technol. 34 032002
The field of optical 3D metrology has gained significant interest in recent years. Optical sensors can probe the geometry of workpieces and biological samples very quickly, with high accuracy and without any tactile physical contact with the object's surface. In this respect, optical sensors are a pre-requisite for many applications in the big trends like the Industrial Internet of Things, Industry 4.0 or Medicine 4.0. The interest in optical 3D metrology is shifting from metrology for quality assurance in industrial production towards "digitizing the real world" to facilitate a precise digital representation of an object or an environment for documentation or as input data for virtual applications like digital fabrication or augmented reality. The aspiration to digitize the world necessitates fast and efficient contact-free sensing principles of appropriate accuracy for solid and even soft objects with a variety of colours, surface textures and lighting conditions. This review article gives a concise conceptual overview of the evolution of a broad variety of optical measurement principles that have gained importance in the field of 3D metrology for industrial 3D applications, together with their related technological enablers.
Yinchu Tian et al 2024 Meas. Sci. Technol. 35 086124
The fast kurtogram (FK) is an efficient method for processing non-stationary signals, widely recognized as a rapid and effective approach for fault diagnosis. However, it has limitations in distinguishing between periodic pulses and random interference pulses, owing to drawbacks in its frequency-band segmentation method and inherent shortcomings of the kurtosis index itself. To address this, this paper proposes a fault-feature extraction method based on the maximum envelope-spectrum power-function-based Gini index (PFGI2) and the empirical wavelet transform. Inspired by the concept of the FK, the method constructs a series of band-pass filters following the principles of empirical wavelet decomposition. It applies envelope-spectrum analysis to a series of sub-bands and calculates the PFGI2 value for each to identify the optimal sub-band. The effectiveness of the proposed method is validated through simulated vibration signals and experimental data.
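Two ingredients the method builds on can be sketched generically. The exact PFGI2 definition is not reproduced; the envelope spectrum below uses the standard Hilbert-transform construction, and the Gini index is the standard sparsity measure (both are assumptions on our part, not the paper's formulas).

```python
import numpy as np
from scipy.signal import hilbert

def envelope_spectrum(x, fs):
    """Envelope spectrum of a band signal: magnitude FFT of the
    mean-removed Hilbert envelope."""
    env = np.abs(hilbert(x - x.mean()))
    spec = np.abs(np.fft.rfft(env - env.mean()))
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    return freqs, spec

def gini_index(x):
    """Gini index of |x|: near 0 for flat sequences, near 1 for sparse
    (impulsive) ones -- a common sub-band ranking criterion."""
    x = np.sort(np.abs(np.asarray(x, dtype=float)))
    n = x.size
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum(x * (n - k + 0.5)) / (n * np.sum(x))
```

Ranking sub-bands by a sparsity measure of their envelope spectra, rather than by raw kurtosis, is what gives robustness against isolated interference pulses.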
Xiancong Shi et al 2024 Meas. Sci. Technol. 35 085013
For camera calibration with a large field of view (FOV), accuracy is affected by the size of the calibration targets. Given the challenges of producing and transporting large targets, existing methods resort to small targets and thus achieve calibration without high accuracy. To solve this problem, a photogrammetric calibration method based on fixed multi-plane targets (FMPT) is proposed in this paper for cameras with a large FOV. The FMPT is composed of multiple identical small planar targets (PTs) with fixed pose-transformation relationships among them. The proposed calibration method involves the following main steps. Firstly, the camera is moved several times to capture a series of images of the FMPT. Secondly, the fixed pose-transformation matrices between the small PTs in the FMPT are calculated from the initial camera parameters. Finally, photogrammetric calibration is conducted, with continuous optimization of the camera's intrinsic and extrinsic parameters based on the multiple 2D and 3D constraints in the FMPT. Simulations and real-data experiments show that the calibration accuracy of the proposed method is much higher than that obtained with a small target, and slightly higher than that obtained with a large target. Furthermore, the experiments demonstrate the robust stability of the method, which maintains high calibration accuracy even in the presence of increased noise and target-production error.
Yang Zhao et al 2024 Meas. Sci. Technol. 35 086313
To address the poor positioning accuracy in indoor–outdoor junction areas, as well as the loss of positioning signals and discontinuous positioning results during the transition from indoor to outdoor areas, this paper proposes a machine-learning method for positioning in the junction area and for switching between positioning methods. An indoor–outdoor positioning-switching algorithm based on particle swarm optimization–back propagation (PSO-BP) and BP neural networks is designed. The algorithm autonomously judges the position of the positioning tag and predicts the tag's coordinates in the indoor–outdoor junction area. The experimental results show that the accuracy of positioning-area judgment based on the PSO-BP neural network reaches 99.91%. The root-mean-square error of the junction-area coordinates predicted by the BP neural network is lower than those of the ultra-wideband indoor positioning method and the BDS positioning method. The proposed model effectively mitigates the positioning problems of the indoor–outdoor junction area and of the indoor-to-outdoor transition, improves positioning accuracy and stability in the junction area, reduces positioning cost, and has strong practicability.
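The BP-network details are not given in the abstract. As a hedged illustration of the PSO component alone, a minimal particle swarm optimizer in its generic textbook form (inertia and acceleration parameters chosen by us, not taken from the paper) looks like:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200, seed=0):
    """Generic particle swarm optimization of f over R^dim:
    each particle tracks its personal best, the swarm tracks a global best,
    and velocities blend inertia with attraction to both."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(f(g))
```

In PSO-BP schemes of the kind the paper describes, `f` would score a candidate set of initial BP weights, so that gradient training starts from a good basin.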
Fangfang Liu et al 2024 Meas. Sci. Technol. 35 085116
To solve the cross-sensitivity problem affecting optical-fiber sensors and realize multiparameter measurement, a microfiber Fabry–Perot interferometer (MFPI) is proposed and experimentally demonstrated. Simultaneous measurement of two distinct physical parameters (temperature and strain) is realized by monitoring the wavelength and reflectivity of the MFPI. In the temperature range of 22 °C–36 °C, the maximum temperature sensitivity reaches 12 pm °C−1. The maximum strain sensitivity is up to 0.8 pm με−1 in the strain range of 0–800 με. In the simultaneous-measurement experiments, the relative errors of temperature and strain were 4.0% and 0.8%, respectively. Furthermore, the sensing element used in this method is just a single fiber-grating sensor without any coating layer, demonstrating the significant advantage of the proposed method in reducing the complexity and cost of multiparameter measurement.
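Simultaneous two-parameter recovery of this kind typically inverts a 2×2 sensitivity matrix. The sketch below uses the abstract's wavelength sensitivities (12 pm °C−1, 0.8 pm με−1) but the reflectivity coefficients are invented purely for illustration; this is not the paper's calibration.

```python
import numpy as np

# Sensitivity matrix (illustrative values; reflectivity row is assumed):
# row 1: d(wavelength)/dT [pm/degC], d(wavelength)/d(strain) [pm/microstrain]
# row 2: d(reflectivity)/dT [%/degC], d(reflectivity)/d(strain) [%/microstrain]
K = np.array([[12.0, 0.8],
              [0.05, 0.002]])

def recover_temperature_strain(d_wavelength_pm, d_reflectivity_pct):
    """Invert the linear response to get (delta-T, delta-strain) from the
    measured wavelength and reflectivity shifts."""
    dT, de = np.linalg.solve(K, [d_wavelength_pm, d_reflectivity_pct])
    return dT, de
```

The method works only when the two rows of K are not proportional, which is exactly the cross-sensitivity decoupling the MFPI is designed to provide.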
L Illés et al 2024 Meas. Sci. Technol. 35 085206
The study aimed to develop a measurement apparatus for in vivo chlorophyll-a (Chl-a) fluorescence-decay measurements of plants by means of time-correlated single photon counting (TCSPC). In this approach, sub-nanosecond laser pulses with a repetition rate of 10 MHz excite the sample, and the arrival times of the emitted fluorescence photons are analysed. The resulting photon statistics are analysed by iteratively fitting the sum of two exponential functions. The tool was tested on both plastid and in vivo leaf samples of Savoy cabbage (Brassica oleracea var. sabauda), with 3–4 subsequent leaves giving complete sample coverage starting from the outermost. The Chl-a fluorescence lifetime exhibited a gradual increase towards the innermost leaf layers in both the isolated plastid suspensions and the in vivo leaf samples, explained by the natural absence of light (etiolation syndrome). Furthermore, cadmium stress and iron deficiency were investigated in vivo on treated sugar beet (Beta vulgaris) samples using TCSPC measurements. The reduced fluorescence quenching resulted in an increased fluorescence lifetime. Finally, long-term (10 week) testing of the setup was carried out on Chl-retaining resurrection Haberlea rhodopensis plants, which protect themselves by elevated non-photochemical quenching, yielding a decrease of fluorescence lifetime during their desiccation.
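The bi-exponential fitting step can be sketched with a standard nonlinear least-squares routine. This is an assumed generic form, not the apparatus software; the amplitude-weighted mean lifetime is one common summary of such fits.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Sum of two exponential decays, the model fitted to TCSPC histograms."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def fit_lifetimes(t, counts, p0=(1.0, 0.5, 1.0, 2.0)):
    """Iteratively fit the bi-exponential model and return the parameters
    plus the amplitude-weighted mean lifetime."""
    popt, _ = curve_fit(biexp, t, counts, p0=p0, maxfev=10000)
    a1, tau1, a2, tau2 = popt
    tau_mean = (a1 * tau1 + a2 * tau2) / (a1 + a2)
    return popt, tau_mean
```

In practice the measured decay is also convolved with the instrument response function, which a full analysis would deconvolve before or during fitting.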
Hanlin Guan et al 2024 Meas. Sci. Technol. 35 082001
Hydraulic-component faults are characterized by nonlinear time-varying signals, strong concealment and difficult feature extraction. Timely and accurate fault diagnosis of hydraulic components helps to curb economic losses and accidents, so researchers have carried out extensive work on them. Information-fusion technology can combine multi-source data from multiple dimensions to mine fault-data features, which effectively improves the accuracy and reliability of fault-diagnosis results. However, a comprehensive and systematic review in this domain has been lacking. Therefore, in this paper, information-fusion fault-diagnosis technologies for hydraulic components are summarized and analysed, encompassing the main process of information-fusion fault diagnosis and the research status of information fusion in the fault diagnosis of hydraulic systems. The methods and techniques involved in the fusion process, together with the data sources and fusion methods used in information-fusion fault diagnosis of hydraulic components, are elaborated and summarized. The problems of information fusion in the fault diagnosis of hydraulic components are analysed, solutions are discussed, and research ideas for improving information-fusion fault diagnosis are put forward. Finally, digital-twin (DT) technology is introduced, and the advantages and research status of DT-based intelligent fault diagnosis are summarized. On this basis, intelligent fault diagnosis of hydraulic components based on information fusion is summarized, and the challenges and future research directions of applying information fusion and DTs to the intelligent fault diagnosis of hydraulic components are comprehensively analysed.
Xin Li et al 2024 Meas. Sci. Technol. 35 072002
The health condition of rolling bearings has a direct impact on the safe operation of rotating machinery. Their working environment is harsh and their working conditions are complex, which brings challenges to fault diagnosis. With the development of computer technology, deep learning has been applied in the field of fault diagnosis and has developed rapidly. Among deep-learning approaches, the convolutional neural network (CNN) has received great attention from researchers due to its powerful data-mining ability and adaptive feature learning. Based on recent research hotspots, the development history and trends of CNNs are summarized and analysed. Firstly, the basic structure of the CNN is introduced, the important progress of classical CNN models for rolling-bearing fault diagnosis in recent years is reviewed, and the problems with classical CNN algorithms are pointed out. Secondly, to solve these problems, various methods and principles for optimizing CNNs are introduced and compared from the perspectives of deep feature extraction, hyperparameter optimization and network-structure optimization, drawing on recent research achievements. Although significant progress has been made in CNN-based fault diagnosis of rolling bearings, there is still room for improvement in addressing issues such as low accuracy on imbalanced data, weak model generalization and poor network interpretability. Therefore, the future development trends of CNNs are discussed: transfer-learning models are introduced to improve the generalization ability of CNNs, and interpretable CNNs are used to increase their interpretability.
Victor H R Cardoso et al 2024 Meas. Sci. Technol. 35 072001
This work addresses the historical development of techniques and methodologies for measuring the internal diameter of transparent tubes, since the original contributions of Anderson and Barr published in 1923 in the first issue of Measurement Science and Technology. Progress in this field is summarized, highlighting the emergence and significance of measurement approaches supported by optical fiber.
Weiqing Liao et al 2024 Meas. Sci. Technol. 35 062002
Mechanical fault diagnosis is crucial for ensuring the normal operation of mechanical equipment. With the rapid development of deep-learning technology, big-data-driven methods provide a new perspective for the fault diagnosis of machinery. However, mechanical equipment operates in the normal condition most of the time, so the collected data are imbalanced, which affects the performance of mechanical fault diagnosis. As a new approach to generating data, the generative adversarial network (GAN) can effectively address the issues of limited and imbalanced data in practical engineering applications. This paper provides a comprehensive review of GANs for mechanical fault diagnosis. Firstly, the development of GAN-based mechanical fault diagnosis, the basic theory of the GAN and various GAN variants (GANs) are briefly introduced. Subsequently, GANs are summarized and categorized from the perspective of labels and models, and the corresponding applications are outlined. Lastly, the limitations of current research, future challenges and trends, and the selection of a GAN for practical applications are discussed.
Jianghong Zhou et al 2024 Meas. Sci. Technol. 35 062001
Predictive maintenance (PdM) is currently the most cost-effective maintenance method for industrial equipment, offering improved safety and availability of mechanical assets. A crucial component of PdM is the remaining useful life (RUL) prediction for machines, which has garnered increasing attention. With the rapid advancements in industrial internet of things and artificial intelligence technologies, RUL prediction methods, particularly those based on pattern recognition (PR) technology, have made significant progress. However, a comprehensive review that systematically analyzes and summarizes these state-of-the-art PR-based prognostic methods is currently lacking. To address this gap, this paper presents a comprehensive review of PR-based RUL prediction methods. Firstly, it summarizes commonly used evaluation indicators based on accuracy metrics, prediction confidence metrics, and prediction stability metrics. Secondly, it provides a comprehensive analysis of typical machine learning methods and deep learning networks employed in RUL prediction. Furthermore, it delves into cutting-edge techniques, including advanced network models and frontier learning theories in RUL prediction. Finally, the paper concludes by discussing the current main challenges and prospects in the field. The intended audience of this article includes practitioners and researchers involved in machinery PdM, aiming to provide them with essential foundational knowledge and a technical overview of the subject matter.
Choudhary et al
Deep learning has made significant contributions to the medical field and has shown great potential in various applications. Its ability to process vast amounts of data and extract patterns has enabled breakthroughs in medical research, diagnosis, and treatment. The application of deep learning plays a vital role in depression detection. Depression is a neurological disorder characterized by persistent feelings of sadness, hopelessness, and a lack of interest. The prevalence of depression is a significant factor contributing to the rise in suicide cases on a global scale. The electroencephalogram (EEG) is a non-invasive technique used to detect depression. It records brain activity using multiple electrodes. The number of EEG electrodes used for measurement directly affects the instrumentation and measurement complexity of the experiment. The present manuscript proposes a deep-learning model for depression detection using only the two electrodes FP1 and FP2. The purpose of employing two electrodes is to enhance the system's portability while reducing data-acquisition time and system cost. EEG data are spatio-temporal and possess inherent spatial and temporal features. The manuscript proposes a methodology for extracting both: a temporal module extracts features in the time domain, and a spatial module extracts features in the spatial domain. It thereby presents a study on the applicability of two electrodes for depression detection, which can enhance accessibility and user-friendliness and ease data collection and analysis. The proposed model is evaluated on two benchmark datasets. It achieves 93.41% classification accuracy, 92.54% precision, 93.23% recall, 93.06% F1 score, and 97.80% AUC on the HUSM dataset; on the MODMA dataset it achieves 79.40% accuracy, 81.18% precision, 67.73% recall, 73.80% F1 score, and 85.66% AUC.
Zhong et al
Noisy image segmentation is a central topic in image analysis. In this paper, we present a novel methodology for tackling this issue through the integration of fractional differentiation in the frequency domain with a variational level set model, which eliminates user-selected initial contours by incorporating a convex energy function. Additionally, the fractional differentiation reduces noise while preserving more detail. Experiments on synthetic and real noisy images demonstrate that our proposed model surpasses other denoising variational level set models in terms of noise reduction, segmentation accuracy and efficiency.
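Frequency-domain fractional differentiation can be sketched with the generic Fourier symbol (iω)^α; this is the standard textbook construction, not the authors' code, and the order α = 0.5 is an arbitrary choice for illustration.

```python
import numpy as np

def frac_diff(f, alpha, dx):
    # fractional derivative of order alpha via the Fourier symbol (i*omega)**alpha
    n = f.size
    omega = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    symbol = (1j * omega) ** alpha
    symbol[0] = 0.0                       # force the zero-frequency mode to zero
    return np.fft.ifft(symbol * np.fft.fft(f)).real

x = np.linspace(0.0, 2 * np.pi, 256, endpoint=False)
g = frac_diff(np.sin(x), 0.5, x[1] - x[0])
# known identity on a periodic grid: D^0.5 sin(x) = sin(x + pi/4)
err = np.max(np.abs(g - np.sin(x + 0.25 * np.pi)))
print(err)
```

The printed error is near machine precision, confirming the half-derivative identity; for images the same symbol is applied along each frequency axis, which attenuates noise less aggressively than an integer-order derivative would amplify it.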
Wang et al
To refine the displacement field of the background-oriented schlieren method, a super-resolution method based on deep learning is proposed in this study and compared with bicubic interpolation. Gradient loss functions were first introduced into the hybrid downsampled skip-connection/multi-scale (DSC/MS) model to improve the reconstruction. The reconstruction performance of the new loss functions was compared with that of the traditional MSE loss function. The results showed that the Laplace operator with average pooling performed better than the original loss function on all indices, including PSNR, MSE, MSE of the gradient and the maximum MSE. Among these four indices, the MSE of the gradient and the maximum MSE improved the most: the MSE of the gradient was reduced from 3.90E-05 to 3.30E-05 and the maximum MSE from 0.392 to 0.360.
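A hedged sketch of what a Laplacian-plus-average-pooling gradient loss could look like; the DSC/MS model itself and its exact loss are not reproduced here, and the pooling size and weighting below are assumptions.

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian of a 2D field, interior points only
    return (u[:-2, 1:-1] + u[2:, 1:-1] +
            u[1:-1, :-2] + u[1:-1, 2:] - 4 * u[1:-1, 1:-1])

def avg_pool(u, k=2):
    h, w = u.shape
    return u[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def combined_loss(pred, target, weight=1.0):
    # plain MSE on the fields plus MSE on the Laplacian of the pooled fields,
    # mimicking a gradient-aware reconstruction loss
    mse = np.mean((pred - target) ** 2)
    grad_mse = np.mean((laplacian(avg_pool(pred)) - laplacian(avg_pool(target))) ** 2)
    return mse + weight * grad_mse

rng = np.random.default_rng(1)
t = rng.standard_normal((32, 32))
print(combined_loss(t, t))   # identical fields give zero loss
```

The gradient term penalizes errors in second-order spatial structure, which is why the abstract reports the largest gains on the gradient-MSE and maximum-MSE indices.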
Celina Bozena Hellmich et al 2024 Meas. Sci. Technol.
A scalable wafer-based fabrication process for a new generation of 3D standards enabling the 3D calibration of optical microscopes is presented and validated. The 3D standards are based on step pyramids with several layers in the µm range for height calibration and a system of cylindrical knobs distributed across the layers as marks for lateral calibration. This enables calibration for the three coordinate axes and the orthogonality error between them in a single measurement step. The requirements necessary for such a calibration, such as optical non-transparency, reproducible flatness of the pyramid step heights and the lowest possible deviations of the lateral mark coordinates, are met by optimising the manufacturing process: the deviation of the height steps distributed over the wafer is ±3.6 nm and is primarily caused by the layer deposition processes. The lateral manufacturing accuracy was determined using a calibrated SEM and shows a mean deviation of 20 or 60 nm, depending on the lateral size of the structures. The EBL process and the level of inaccuracy of the SEM standard have an influence on the lateral scaling accuracy. Based on the tactilely generated height values and the mark coordinates determined by a calibrated SEM, an example calibration of a CLSM was successfully performed and showed good conformity to conventional calibration techniques.
Gefan et al
Real-time monitoring of slopes, tunnels, and dams is important for ensuring the long-term stability and reliability of such structures. Despite the successes of current technologies in many applications, a gap still exists in certain areas, such as precision on steep slopes and in complex soil conditions. This study has designed a flexible inclinometer based on an array of Micro-Electromechanical System (MEMS) sensors to enhance the accuracy and flexibility of existing monitoring techniques. The inclination angle of each flexible-inclinometer measurement unit was measured to monitor the horizontal or vertical displacement of the target structural body. We used the Levenberg–Marquardt (LM) algorithm to optimize the MEMS-sensor-based calibration and designed multiple experiments to test the accuracy of the proposed method. Experimental results show that the calibrated flexible-inclinometer measurement unit has an inclination-angle error of less than 0.04°, and the accuracy of the flexible inclinometer lies within ±0.4 mm in the horizontal attitude and 1.6 mm in the vertical attitude. Our research has developed a novel tool for geotechnical engineering monitoring that can aid in increasing the precision of real-time assessment and prediction of structural stability.
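How a segment-wise inclinometer turns per-unit angles into a displacement profile can be illustrated with simple trigonometry; the 0.5 m segment length and the angle values below are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def profile_from_angles(theta_deg, seg_len=0.5):
    # horizontal displacement profile from per-segment inclination angles:
    # each segment of length seg_len contributes seg_len * sin(theta)
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    return np.cumsum(seg_len * np.sin(theta))

# five 0.5 m segments, all tilted by 1 degree:
# tip displacement is 2.5 m * sin(1 deg), about 44 mm
disp = profile_from_angles([1, 1, 1, 1, 1])
print(disp[-1])
```

Because displacement accumulates along the chain, a 0.04° per-unit angle error bounds the achievable millimetre-level displacement accuracy quoted in the abstract.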
Junrong Zhang et al 2024 Meas. Sci. Technol.
We analyze a smooth pressure solver based on the "modified Poisson equation": ∇²p + ξ²∂²p/∂t² = f(u(t)), where p is the pressure field, u(t) is the velocity field measured by time-resolved image velocimetry, and ξ² is a tunable parameter to control the solver's diffusive behavior in time. This modified Poisson equation aims at obtaining smooth pressure fields from potentially noisy image velocimetry measurements, and is part of the current four-dimensional (4D) pressure solver (implemented in, for example, DaVis 10.2) by LaVision. This work focuses on investigating three aspects of the "modified Poisson equation": the smoothing effect, error propagation, and drift in time. We first provide rigorous analysis and validate that this solver can sufficiently smooth the computed pressure field by setting a large enough ξ². However, a large value of ξ² may cause large errors in the reconstructed pressure fields. We then introduce an upper bound on the error in the reconstructed pressure fields to quantify the error-propagation dynamics. Finally, we discuss the potential drift due to partitioning in time, which is an optional strategy used in LaVision's current 4D pressure solver to reduce computational costs. Our analysis and validation not only show that a careful choice of the parameters (e.g., ξ²) is needed for smooth and accurate pressure-field reconstruction but also provide theoretical guidelines for parameter tuning when similar pressure solvers are used for time-resolved image velocimetry data.
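The role of ξ² can be illustrated on a single spatial Fourier mode, where ∇² reduces to −k² and the equation becomes a tridiagonal linear system over the time samples. The discretization, the zero-slope treatment of the time endpoints, and all parameter values below are illustrative assumptions, not LaVision's implementation.

```python
import numpy as np

def solve_mode(f, k, xi2, dt):
    # single spatial Fourier mode: -k**2 * p + xi2 * d2p/dt2 = f(t),
    # discretized in time with central differences
    n = f.size
    c = xi2 / dt**2
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -k**2 - 2 * c
        if i > 0:
            A[i, i - 1] = c
        if i < n - 1:
            A[i, i + 1] = c
    A[0, 1] += c          # mirror ghost points: zero-slope ends in time
    A[-1, -2] += c
    return np.linalg.solve(A, f)

t = np.linspace(0.0, 1.0, 200)
rng = np.random.default_rng(2)
f = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(t.size)
p_plain = solve_mode(f, k=1.0, xi2=0.0, dt=t[1] - t[0])    # ordinary Poisson mode
p_smooth = solve_mode(f, k=1.0, xi2=1e-3, dt=t[1] - t[0])  # temporally coupled
# a larger xi2 couples neighbouring time steps and damps frame-to-frame noise
print(np.std(np.diff(p_smooth)) < np.std(np.diff(p_plain)))
```

With ξ² = 0 each time step is solved independently, so the noise in f passes straight into p; the ξ² term trades that noise for temporal diffusion, which is exactly the tension the paper analyzes.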
Meng Zhang 2024 Meas. Sci. Technol.
Rolling-bearing fault diagnosis is crucial for ensuring the safe and reliable operation of mechanical equipment. Detecting faults directly from measurement signals is challenging due to severe noise and interference. Blind deconvolution, as a preferred method, effectively recovers periodic pulses from the measured vibration signals of faulty bearings. This study introduces a simulated-annealing-based blind deconvolution approach to enhance the pulse signal components reflecting faults in vibration signals measured on rolling bearings. The method iteratively searches for the optimal coordinates in a high-dimensional orthogonal optimization space, where the optimal coordinates represent the combination of inverse-filter coefficients. Compared to the generalized spherical optimization space used in the "Optimization-Blind Deconvolution" method in previous works, the proposed finite high-dimensional optimization space helps overcome the problem of inverse-filter-coefficient convergence, allowing inverse filters to be designed without limits on their shape. To better accommodate the cyclostationary characteristics of bearing signals measured in practice, the proposed method employs a target vector that allows for uncertainty in pulse occurrence moments, thus overcoming challenges introduced by pseudo-periodic phenomena resulting from bearing slippage. Numerical simulations and experimental results on real bearing vibration signals confirm that the proposed method can design more flexible filters to enhance pulse-like patterns in signals and effectively utilize limited filter resources. Its capacity to tolerate inaccurate fault-period estimates, high background noise, and pulse randomness enables it to effectively address vibration measurement signals in real-world scenarios.
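A toy version of blind deconvolution by simulated annealing over inverse-filter coefficients; the paper's target vector and optimization space are not reproduced here, and the objective (kurtosis, which rewards impulsive outputs), filter length, cooling schedule, and test signal are all assumptions.

```python
import numpy as np

def kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / (np.mean(x**2) ** 2 + 1e-30)

def sa_blind_deconv(x, taps=16, iters=800, seed=0):
    # simulated annealing over inverse-filter coefficients; the filtered
    # signal is scored by kurtosis, which rewards impulsive fault signatures
    rng = np.random.default_rng(seed)
    cur = rng.standard_normal(taps) * 0.1
    cur_s = kurtosis(np.convolve(x, cur, mode='valid'))
    best, best_s = cur.copy(), cur_s
    T = 1.0
    for _ in range(iters):
        cand = cur + 0.1 * rng.standard_normal(taps)
        s = kurtosis(np.convolve(x, cand, mode='valid'))
        # accept uphill moves always, downhill moves with Boltzmann probability
        if s > cur_s or rng.random() < np.exp((s - cur_s) / T):
            cur, cur_s = cand, s
            if s > best_s:
                best, best_s = cand.copy(), s
        T = max(T * 0.995, 1e-3)
    return best, best_s

# synthetic bearing-like signal: periodic impulses blurred and buried in noise
rng = np.random.default_rng(3)
imp = np.zeros(2000)
imp[::100] = 1.0
x = np.convolve(imp, np.ones(8) / 8, mode='same') + 0.05 * rng.standard_normal(2000)
h, score = sa_blind_deconv(x)
print(h.shape, score)
```

Because the search perturbs all coefficients directly, the filter shape is unconstrained — the property the paper contrasts with spherically parameterized search spaces.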
Boyao Liu and William (Bill) Allison 2024 Meas. Sci. Technol.
We describe a compact constant-current power supply with μA precision designed to drive coils. The unit generates currents from -125 mA to 125 mA with a load of up to 10 Ω using a precision 16-bit Digital-to-Analogue Converter (DAC), driven from a microcontroller (e.g. a Raspberry Pi Pico). All power for the unit is derived from the 5 V supply of the microcontroller. As a demonstration of the capability of the power supply, it was applied to spin manipulation in a helium spin-echo system.
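The current-to-code arithmetic for a bipolar 16-bit DAC spanning ±125 mA can be sketched as follows; the linear code mapping is an assumption about the design, but it shows why one step lands at roughly 3.8 μA, consistent with the stated μA precision.

```python
# assumed bipolar mapping: code 0 -> -125 mA, code 65535 -> +125 mA
I_MIN, I_MAX, FULL = -0.125, 0.125, 65535

def dac_code(i_amps):
    # nearest 16-bit code for a requested current, clamped to range
    code = round((i_amps - I_MIN) / (I_MAX - I_MIN) * FULL)
    return max(0, min(FULL, code))

def code_to_current(code):
    return I_MIN + code / FULL * (I_MAX - I_MIN)

lsb = (I_MAX - I_MIN) / FULL      # one step, about 3.8 uA
c = dac_code(0.050)               # request +50 mA
print(lsb * 1e6, c, code_to_current(c))
```

Quantization error is at most half an LSB, i.e. under 2 μA for any requested current in range.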
Josef L Richmond et al 2024 Meas. Sci. Technol. 35 085502
Quantifying vacuum-ultraviolet (VUV) fluxes typically requires vacuum-compatible spectrometers and is often associated with significant cost and effort. A simple technique for the absolute measurement of local VUV fluxes from plasmas using the photoemission from a set of coated metal plates is described. The radiant power from a 13.56 MHz hydrogen plasma operating at 40–87 mTorr and with a radio-frequency (RF) input power from 100 to 120 W was investigated by irradiating a set of 2 cm diameter Au, Ag and Cu plates. The variation in photoemission currents was compared with the photoelectric yield curves to estimate the absolute flux incident on the surfaces in the 113–190 nm range. The measured fluxes were found to have an uncertainty of 5%–30% when compared with the VUV spectrometer measurements. The VUV output power was found to have a maximum at a pressure of 70–80 mTorr and to increase with RF power. In all cases, the VUV output power was measured to be approximately 12%–16% of the RF input power to the matching network, in good agreement with spectroscopy results.
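The conversion from photoemission current to radiant power follows directly from the photoelectric yield: each emitted electron corresponds to 1/Y absorbed photons. The current, yield, and wavelength below are hypothetical numbers chosen only to illustrate the arithmetic.

```python
E_CHARGE = 1.602e-19   # C
H_PLANCK = 6.626e-34   # J s
C_LIGHT = 3.0e8        # m/s

def vuv_power(i_photo, yield_per_photon, wavelength_m):
    # photon rate from photocurrent and photoelectric yield,
    # then radiant power from the photon energy h*c/lambda
    photons_per_s = i_photo / (E_CHARGE * yield_per_photon)
    return photons_per_s * H_PLANCK * C_LIGHT / wavelength_m

# hypothetical reading: 10 nA photocurrent, 5% yield, 120 nm light
p = vuv_power(10e-9, 0.05, 120e-9)
print(p)   # on the order of microwatts
```

In practice the yield varies strongly with wavelength, which is why the paper compares currents from plates with different coatings against the full yield curves rather than using a single number.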
Caroline Dorszewski et al 2024 Meas. Sci. Technol.
Analyzing the airflow around wind turbines during operation requires an in-process-capable measurement approach that functions without modification of the rotor blade. Infrared-thermographic flow visualization is such a measurement approach. However, its measurement capabilities on wind turbines in operation are highly weather-dependent. Therefore, to understand the expected flow-visualization quality in non-laboratory conditions, the dependence of the achievable contrast and contrast-to-noise ratio (CNR) of the laminar-turbulent transition on solar radiation and air temperature is studied.
A linear dependence of the contrast on the absorbed solar radiation is derived as a first estimation from a theoretical study of the heat balance, while air-temperature variations are shown to have no effect under certain conditions. The slope of the linear dependence, about 0.025 m²K/W, was validated by experiments. To further study the fundamental measurability limit, only camera noise with constant variance is applied here to determine the achievable contrast-to-noise ratio, which is thus directly proportional to the contrast.
As a result, the achievable contrast and CNR for visualizing the laminar-turbulent flow transition over the year, over the day, and for different yaw angles of the wind turbine are determined. For this investigation, a wind-turbine location in northern Germany is assumed as an example, and a maximal achievable contrast and CNR of 4.2 K and 122, respectively, are estimated, which agree with previous measurements. The presented method applies to any other wind-turbine location and thus enables planning thermographic flow measurements on any wind turbine in the world.
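The linear contrast model and the constant-variance CNR can be reproduced numerically using the 0.025 m²K/W slope reported in the abstract; the absorbed irradiance and camera-noise values below are assumptions chosen to land near the reported maxima of 4.2 K and 122.

```python
SLOPE = 0.025   # m^2*K/W, slope of contrast vs absorbed solar radiation

def thermal_contrast(absorbed_irradiance_w_m2):
    # linear model: contrast grows with absorbed solar radiation
    return SLOPE * absorbed_irradiance_w_m2

def cnr(contrast_k, camera_noise_k):
    # constant-variance camera noise -> CNR directly proportional to contrast
    return contrast_k / camera_noise_k

c = thermal_contrast(168.0)      # irradiance assumed to reproduce the 4.2 K maximum
print(c, cnr(c, 0.0344))         # camera noise level here is an assumed value
```

Because the noise is taken as constant, planning reduces to predicting absorbed irradiance over the year, the day, and the yaw angle.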
Jaqueline Stauffenberg et al 2024 Meas. Sci. Technol. 35 085011
This paper explores the large-area application of tip-based nanofabrication by field-emission scanning probe lithography and showcases the simultaneous possibility of atomic force microscopy on macroscopic scales. This is made possible by the combination of tip-based technology and a planar nanopositioning and nanomeasuring machine. Using long-range atomic force microscopy measurements of regular grating structures, the performance of the machine is thoroughly characterized over the full 100 mm range of motion of the positioning machine, which was confirmed by repeated measurements. After initially focussing on achieving a minimum line width of 40 nm in microscopic areas, a grating with a pitch of 1 μm is additionally fabricated over a total length of 10 mm, whereby the dimensions and deviations are also considered.
Ines Fortmeier et al 2024 Meas. Sci. Technol. 35 085012
High-quality aspherical and freeform surfaces are in high demand, and high-accuracy form measurement of such surfaces is a challenging task. To explore the current status of form-measurement systems for complex surfaces such as aspheres and freeforms, interlaboratory comparison measurements were performed. This study presents the pseudonymized results obtained using three different surfaces (a metal asphere, a glass asphere and a toroidal surface) in a total of six different round robins. These results were taken from a total of 13 different measurement instruments based on 9 different measurement principles and operated at 12 different laboratories. They were analyzed using a sophisticated procedure that was first developed in 2018 and then refined and tested on simulated data in 2022 to address the challenges of such a comparison at this level of accuracy. In the current study, we applied these refined methods to data acquired from tactile and optical point measurements as well as from optical areal measurements. As no absolutely measured and very well characterized reference-standard aspherical and freeform surfaces are available at the accuracy level of a few tens of nanometers root-mean-square, the approximated true forms of the surfaces were derived from the measurements and indicate the manufacturing accuracy of the surface forms. Then, the differences of the measurements from the approximated true forms were analyzed, which directly indicate the systematic measurement errors of the instruments. By also comparing the approximated true forms from the two different round robins for each surface, additional insights into the reliability and stability of these so-called virtual reference topographies were gained.
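The idea of deriving an approximated true form from the measurements themselves can be sketched on synthetic data; the surface, noise levels, instrument count, and the offset instrument below are all invented for illustration and are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0.0, np.pi, 100)
true_form = 1e-6 * np.sin(x)                         # synthetic surface profile, metres
meas = true_form + rng.normal(0.0, 20e-9, (5, 100))  # five instruments, 20 nm noise
meas[2] += 30e-9                                     # one instrument with a systematic offset

reference = meas.mean(axis=0)     # approximated true form derived from the data
systematic = meas - reference     # per-instrument deviation maps
rms_nm = np.sqrt(np.mean(systematic**2, axis=1)) * 1e9
print(np.argmax(rms_nm))          # the offset instrument stands out
```

The real analysis procedure is considerably more sophisticated (alignment, weighting, cross-round-robin comparison), but the deviation-from-consensus principle is the same.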
Bruno Eckmann et al 2024 Meas. Sci. Technol. 35 085010
Scanning microwave microscopy (SMM) is a combination of an atomic force microscope with a vector network analyzer (VNA) used to measure locally resolved impedances. The technique finds application in the realms of the semiconductor industry, materials science and biology. To determine quantitative material properties from the measured impedances, the system must be calibrated. Transferring the calibration from the calibration substrate onto the material under test is strongly limited when using unshielded probes, as the electromagnetic coupling to the surroundings can reach several centimeters. This work reports the fabrication of coaxially shielded probes for a scanning microwave microscope and their integration into such an instrument. We discuss a calibration method with dielectric references, using a simulation-assisted 1-port VNA calibration algorithm. Uncertainty considerations of the measurement process are included, and their propagation through the algorithm is performed. The calibration is verified with an additional dielectric reference. As an application example, the results for a static-random-access-memory sample are presented. We identified system-related drift and trace noise as the dominant contributors to the uncertainties of the calibrated results. The shielded tips presented here can broaden the application scope of SMM, as they open the door to measurements in liquids.
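A generic numerical sketch of 1-port VNA calibration with three known references: this is the standard three-term bilinear error model, not the authors' simulation-assisted algorithm, and all error-box and standard values are synthetic.

```python
import numpy as np

def fit_error_terms(g_true, g_meas):
    # one-port error model  Gm = (a*G + b) / (c*G + 1),
    # linearized as a*G + b - c*G*Gm = Gm and solved from three standards
    G = np.asarray(g_true, dtype=complex)
    Gm = np.asarray(g_meas, dtype=complex)
    A = np.stack([G, np.ones(3, dtype=complex), -G * Gm], axis=1)
    return np.linalg.solve(A, Gm)         # returns (a, b, c)

def correct(gm, abc):
    a, b, c = abc
    return (gm - b) / (a - c * gm)        # invert the bilinear error model

# synthetic error box and three references (open, short, dielectric-like load)
a, b, c = 0.9 * np.exp(0.3j), 0.05 + 0.02j, 0.1 - 0.05j
std = np.array([1.0, -1.0, 0.2 + 0.1j], dtype=complex)
meas = (a * std + b) / (c * std + 1)
abc = fit_error_terms(std, meas)

gm_dut = (a * 0.4j + b) / (c * 0.4j + 1)  # a hypothetical device under test
print(correct(gm_dut, abc))               # recovers ~0.4j
```

With exact reference values the correction is exact; in practice the uncertainty of the dielectric references propagates through this inversion, which is the propagation analysis the paper performs.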
Mohammadmahdi Abedi et al 2024 Meas. Sci. Technol.
This study investigates the synergistic effects of cement, water, and hybrid carbon nanotube/graphene nanoplatelet (CNT/GNP) concentrations on the mechanical, microstructural, durability, and piezoresistive properties of self-sensing cementitious geocomposites. Varied concentrations of cement (8% to 18%), water (8% to 16%), and CNT/GNP (0.1% to 0.34%, 1:1) were incorporated into cementitious stabilized sand (CSS). Mechanical characterization involved compression and flexural tests, while microstructural analysis utilized dry density, apparent porosity, water absorption, and non-destructive ultrasonic testing, alongside TGA, SEM, EDX, and XRD analyses. The durability of the composite was also assessed against 180 freeze-thaw cycles. Moreover, the piezoresistive behaviour of the nano-reinforced CSS was analyzed during cyclic flexural and compressive loading using the four-probe method. The optimal carbon nanomaterial (CNM) content was found to depend on the water and cement ratios. Generally, elevating the water content led to a rise in the optimal CNM concentration, primarily attributed to improved dispersion and adequate water for the cement hydration process. The maximum increments in flexural and compressive strengths, compared to plain CSS, were significant, reaching up to approximately 30% for flexural strength and 41% for compressive strength, for the specimen containing 18% cement, 12% water, and 0.17% CNM. This improvement was attributed to the nanoparticles' pore-filling function, acceleration of hydration, regulation of free water, and facilitation of crack-bridging mechanisms in the geocomposite. Further decreases in cement and water content adversely impacted the piezoresistive performance of the composite. Notably, specimens containing 8% cement (across all water content variations) and 10% cement (with 8% and 12% water content) showed no piezoresistive response.
In contrast, specimens containing 14% and 18% cement displayed substantial sensitivity under loading, as evidenced by elevated gauge factors.
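The gauge factor used here as the sensitivity metric is simply the fractional resistance change per unit strain; the reading below is hypothetical and serves only to show the arithmetic.

```python
def gauge_factor(delta_r_over_r, strain):
    # piezoresistive sensitivity: fractional resistance change per unit strain
    return delta_r_over_r / strain

# hypothetical cyclic-loading reading: 0.6% resistance change at 100 microstrain
gf = gauge_factor(0.006, 100e-6)
print(gf)   # roughly 60
```

Specimens without a percolating CNT/GNP network show essentially no resistance change, hence the "no piezoresistive response" result for the low-cement mixes.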