Accurate measurement of fluid viscosity is crucial for optimising industrial processes and enabling fully automated laboratories, but current techniques often prove intrusive and struggle outside of carefully controlled settings. Jongwon Sohn, Juhyeon Moon, and Hyunjoon Jung, alongside Jaewook Nam, all from Seoul National University, now present a novel vision-based viscometer that overcomes these limitations. Their system infers viscosity by analysing the distortion of a background pattern as light passes through a mixing fluid, offering a non-invasive method for real-time monitoring. The researchers demonstrate high accuracy in both direct viscosity measurement and classification, achieving a mean absolute error of just 0.113 in log m²·s⁻¹ units, and, importantly, incorporate uncertainty quantification to ensure reliable predictions, paving the way for robust automation in diverse industrial applications.
Fluid Measurement and Control via Vision
Research in fluid dynamics and rheology increasingly focuses on advanced measurement and control techniques leveraging computer vision and deep learning. Scientists are developing innovative methods for characterizing fluid properties like viscosity and controlling fluid behavior in diverse applications, ranging from food science to biopharmaceutical manufacturing. This work moves beyond traditional experimental approaches, offering new possibilities for real-time monitoring and precise control of fluid processes. Investigations cover fundamental aspects of fluid dynamics, including free surface flows and the behavior of fluids under rotation.
Studies also explore the relationship between fluid viscosity, concentration, temperature, and shear rate, seeking to understand how these factors influence fluid behavior. Researchers are investigating fluid instabilities and the patterns that emerge in fluid flows, providing insights into complex fluid phenomena. Traditional viscosity measurement techniques are being complemented by microfluidic rheology, which offers advantages in terms of sample volume and control. Inline viscometry provides methods for measuring viscosity within flowing process streams, enabling real-time monitoring of industrial processes.
However, a growing trend involves utilizing computer vision and deep learning to estimate viscosity directly from visual data, offering a non-invasive and potentially more versatile approach. Techniques like Background-Oriented Schlieren (BOS) are employed to visualize fluid flow patterns, while computer vision algorithms reconstruct fluid surfaces from images and videos. These methods are being applied to a wide range of applications, including identifying liquid properties through image analysis and controlling fluid behavior in robotic systems. The integration of robotics allows for precise fluid handling and control, particularly in applications like automated pouring and manufacturing processes.
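The core idea behind Background-Oriented Schlieren is that refraction makes a known background pattern appear locally shifted, and that shift can be recovered by comparing images. As a minimal illustrative sketch (not the authors' pipeline, and with a hypothetical helper name `patch_displacement`), the apparent shift of one background patch can be estimated by brute-force cross-correlation search in pure NumPy:

```python
import numpy as np

def patch_displacement(ref, img, y, x, half=8, search=4):
    """Estimate the (dy, dx) shift of a small patch between a reference
    background image and a refraction-distorted image, by brute-force
    search over integer displacements (a minimal BOS-style probe)."""
    patch = ref[y - half:y + half, x - half:x + half]
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = img[y + dy - half:y + dy + half,
                       x + dx - half:x + dx + half]
            # zero-normalized cross-correlation as the similarity score
            a = patch - patch.mean()
            b = cand - cand.mean()
            denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
            score = (a * b).sum() / denom
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

# synthetic demo: shift a random background by (2, -1) and recover it
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
img = np.roll(ref, shift=(2, -1), axis=(0, 1))
print(patch_displacement(ref, img, 32, 32))  # → (2, -1)
```

Real BOS systems compute a dense displacement field (often with sub-pixel optical flow) rather than a single integer shift, but the correlation principle is the same.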
This research has significant implications for various industries, including food science, where understanding fluid properties is crucial for product development and quality control. In biopharmaceuticals, controlling viscosity is essential for formulating protein-based drugs. The oil and gas industry benefits from improved understanding of heavy oil flow, while manufacturing processes like injection molding rely on precise viscosity control for optimal performance.
Fluid Viscosity from Surface Pattern Distortion
Scientists have pioneered a new stand-off viscometer, ViscNet, that determines fluid viscosity by analyzing how a fixed background pattern distorts as light refracts through a mixing fluid. This innovative approach moves beyond traditional mechanical instruments, offering a non-invasive method for measuring viscosity without direct fluid contact or the need for controlled laboratory conditions. Researchers harnessed the natural dynamics of fluid mixing, recognizing that viscous dissipation modulates surface geometry in a way that encodes viscosity information. The system illuminates the mixing fluid with a patterned background and captures the resulting distortion with a camera.
This distortion, caused by light refraction, serves as an optical probe directly related to viscosity. To improve performance for fluids with similar viscosity values, scientists implemented a multi-pattern strategy, enriching the visual cues used for analysis. A large-scale video dataset was constructed, combining real experimental recordings with simulated data generated through particle-based fluid simulations and photorealistic rendering, providing a comprehensive training set for the vision system. ViscNet utilizes a modified Video Vision Transformer (ViViT) architecture to predict viscosity directly from video sequences of the distorted background pattern.
This architecture extracts features that correlate with viscosity, enabling accurate predictions. Experiments demonstrate the system achieves a mean absolute error of 0.113 in log m²·s⁻¹ units for regression and reaches up to 81% accuracy in viscosity-class prediction. Furthermore, the system incorporates uncertainty quantification, providing confidence estimates alongside viscosity predictions, ensuring reliable and robust measurements in diverse conditions.
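To give a rough sense of how a ViViT-style model consumes video, the input is typically split into small spatio-temporal "tubelets" that are flattened into token vectors before entering the transformer. The sketch below shows only this tokenization step; the function name and the tubelet sizes (`t=2`, `p=16`) are illustrative assumptions, not the paper's reported configuration:

```python
import numpy as np

def tubelet_tokens(video, t=2, p=16):
    """Split a video of shape (T, H, W, C) into non-overlapping
    t x p x p tubelets and flatten each into one token vector,
    as in ViViT-style tubelet embedding (before linear projection)."""
    T, H, W, C = video.shape
    # crop so each dimension divides evenly into tubelets
    v = video[:T - T % t, :H - H % p, :W - W % p]
    v = v.reshape(v.shape[0] // t, t,
                  v.shape[1] // p, p,
                  v.shape[2] // p, p, C)
    # group the (t, p, p, C) voxels of each tubelet together, then flatten
    tokens = v.transpose(0, 2, 4, 1, 3, 5, 6).reshape(-1, t * p * p * C)
    return tokens

video = np.zeros((8, 64, 64, 3))  # 8 RGB frames of 64 x 64 pixels
print(tubelet_tokens(video).shape)  # → (64, 1536)
```

Each of the 64 tokens here covers 2 frames and a 16×16 pixel region; a learned linear projection and positional embeddings would follow in a full model.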
Viscosity Measurement via Non-Contact Vision System
Scientists have developed a novel vision-based viscometer that measures fluid viscosity without physical contact, making it suitable for real-world process monitoring. The system achieves a mean absolute error of 0.113 in log m²·s⁻¹ units when predicting viscosity using regression analysis, demonstrating high accuracy in quantifying fluid thickness. Furthermore, the system correctly predicts viscosity class with 81% accuracy, indicating its ability to categorize fluids based on their flow properties. While performance decreases for fluids with very similar viscosities, a multi-pattern strategy improves robustness by providing richer visual information for analysis.
The breakthrough lies in the system’s ability to infer viscosity by analyzing distortions in a fixed background pattern as light refracts through the fluid’s surface. Experiments revealed that the degree of distortion directly correlates with the fluid’s viscosity, allowing for non-invasive measurement. To ensure reliable measurements, the team incorporated uncertainty quantification, enabling the system to provide viscosity predictions alongside confidence estimates, crucial for applications requiring dependable data, such as automated laboratory processes and industrial quality control. The research team implemented a deep neural network, ViscNet, based on a Video Vision Transformer (ViViT) architecture to analyze video footage of the fluid’s surface.
By embedding impeller rotation speed as a conditioning signal, the model effectively decouples input energy from viscosity-dependent decay characteristics. The network outputs a Gaussian mixture model, allowing for quantification of predictive uncertainty and assessment of measurement reliability. This innovative approach delivers a practical, automation-ready alternative to traditional viscometry methods.
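Since the network outputs a Gaussian mixture, the point prediction and its uncertainty can be read off from the mixture parameters via the law of total variance. The helper below is an illustrative sketch with made-up numbers, not the authors' exact output head:

```python
import numpy as np

def gmm_prediction(weights, means, variances):
    """Given per-sample Gaussian-mixture parameters, return the point
    prediction (the mixture mean) and the predictive variance via the
    law of total variance: Var = E[var_k + mu_k^2] - (E[mu_k])^2."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # ensure the mixture weights are normalized
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    mean = np.sum(w * mu)
    total_var = np.sum(w * (var + mu ** 2)) - mean ** 2
    return mean, total_var

# hypothetical 2-component output for a log-viscosity prediction
mean, var = gmm_prediction([0.7, 0.3], [-3.0, -2.6], [0.01, 0.04])
print(round(mean, 2), round(var, 3))
```

A large predictive variance (from disagreeing components or wide individual Gaussians) flags a measurement that should not be trusted blindly, which is what makes the confidence estimates actionable in automated settings.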
Viscosity Mapping via Refraction Distortion Analysis
This research demonstrates a novel vision-based viscometer that accurately infers fluid viscosity by analyzing distortions in a fixed background pattern as light refracts through a stirred fluid. The system successfully estimates viscosity across a range of values, achieving a mean absolute error of 0.113 in logarithmic units and up to 81% accuracy in viscosity classification. This method offers a practical, non-invasive alternative to traditional viscometry, functioning remotely and eliminating the need for controlled laboratory environments. The team acknowledges that performance diminishes when distinguishing between viscosity classes that are very close in value, highlighting a limitation in resolving subtle differences. To address this, they implemented a multi-pattern strategy, which enhances robustness by providing richer visual information. Furthermore, the incorporation of uncertainty quantification allows for viscosity predictions accompanied by confidence estimates, increasing the reliability of the measurements.
👉 More information
🗞 ViscNet: Vision-Based In-line Viscometry for Fluid Mixing Process
🧠 ArXiv: https://arxiv.org/abs/2512.01268
