The challenge of efficiently learning from sequential data drives innovation in machine learning, and researchers are now exploring the combination of random features with the dynamics of differential equations. Francesco Piatti, Thomas Cass, and William F. Turner, all from Imperial College London, present a new framework that leverages large, randomly parameterised differential equations as ‘reservoirs’ to transform complex time-series data into more manageable representations. This approach significantly reduces the computational burden of training, as only a final processing layer requires adjustment, and it establishes a theoretical link between random feature methods, continuous-time neural networks, and path-signature analysis. The team’s work, encompassing both Random Fourier CDEs and Random Rough DEs, delivers competitive performance on standard benchmarks, offering a practical and scalable alternative to traditional signature-based methods for analysing time-series data.
Researchers engineered large, randomly parameterized CDEs to function as continuous-time reservoirs that map input paths into rich representations; crucially, only a linear readout layer is trained, enabling fast, scalable performance with a strong inductive bias. The method offers a practical alternative to explicit signature computations, preserving the inductive bias inherent in signature methods while capitalizing on the efficiency of random features. The team has made their code publicly available to facilitate further research and application.
Random Differential Equations for Time-Series Learning
Scientists have developed a training-efficient framework for time-series learning that combines random features with controlled differential equations, achieving competitive or state-of-the-art performance on benchmark datasets. In this method, large, randomly parameterized differential equations function as continuous-time reservoirs, transforming input data into rich representations, with only a linear readout layer requiring training. The framework comes in two variants: Random Fourier CDEs (RF-CDE) maintain strong performance even with a limited number of random features, while Random Rough DEs (R-RDE) excel at extracting fine-grained information from irregular time-series data and perform particularly well on datasets with complex structure. The computational complexity of both models scales linearly with sequence length, a significant advantage over traditional kernel baselines.
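The reservoir-plus-linear-readout recipe can be sketched with a simple Euler discretization of a randomly parameterized CDE. Everything below is an illustrative assumption rather than the authors' implementation: the tanh vector field, the Gaussian weights, the hidden dimension, and the ridge-regression readout are all placeholder choices; only the overall structure (frozen random dynamics driven by path increments, trained linear readout) reflects the method described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(input_dim, hidden_dim=32):
    """Draw random CDE parameters once; they stay frozen (never trained)."""
    A = rng.normal(0.0, 1.0 / np.sqrt(hidden_dim),
                   size=(input_dim, hidden_dim, hidden_dim))
    b = rng.normal(0.0, 1.0, size=(input_dim, hidden_dim))

    def features(path):
        """Euler scheme for dh = tanh(A_i h + b_i) dX^i: the increments of the
        input path X drive the hidden state h; cost is linear in path length."""
        h = np.zeros(hidden_dim)
        for t in range(1, len(path)):
            dx = path[t] - path[t - 1]      # path increment at this step
            drift = np.tanh(A @ h + b)      # (input_dim, hidden_dim)
            h = h + drift.T @ dx            # contract against the increment
        return h

    return features

# Only the linear readout is trained, here by ridge regression on the
# final reservoir states of 20 synthetic random-walk paths.
feats = make_reservoir(input_dim=2, hidden_dim=32)
X = np.stack([feats(rng.normal(size=(50, 2)).cumsum(axis=0))
              for _ in range(20)])
y = rng.normal(size=20)                      # dummy regression targets
W = np.linalg.solve(X.T @ X + 1e-3 * np.eye(32), X.T @ y)
```

Because the random parameters are fixed, computing features for a new path requires no backpropagation, and training reduces to a single linear solve.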
Random Features and Controlled Differential Equations
This research presents a novel framework for time-series learning that combines random features with controlled differential equations: large, randomly drawn reservoirs capture complex temporal dynamics, while only a linear readout is trained. This design achieves scalability and speed while incorporating a strong inductive bias suited to time-series analysis.
The key achievement lies in demonstrating, through mathematical analysis, that these random-feature reservoirs converge to well-established kernel methods in the limit of infinite width, specifically the RBF-lifted signature kernel and the rough signature kernel. This unification provides a theoretical foundation linking random features, continuous-time deep learning, and path-signature theory, and yields a practical alternative to direct signature computations. The authors acknowledge that their theoretical analysis currently relies on the assumption of Gaussian weight matrices, but suggest the conclusions extend to a broader range of ensembles. Future research will explore applications to more complex datasets and investigate adapting the framework to other types of sequential data.
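The infinite-width convergence to kernel methods parallels the classical random Fourier feature approximation of the RBF kernel: as the number of random features grows, inner products of the feature maps recover the kernel. The sketch below illustrates this general phenomenon (bandwidth, dimensions, and feature count are arbitrary choices for illustration), not the signature-kernel limits established in the paper itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def rff(x, W, b):
    """Random Fourier feature map phi(x) = sqrt(2/D) * cos(Wx + b).
    E[phi(x)·phi(y)] = exp(-||x - y||^2 / 2), the unit-bandwidth RBF kernel."""
    return np.sqrt(2.0 / len(b)) * np.cos(W @ x + b)

d, D = 3, 20_000
W = rng.normal(size=(D, d))              # frequencies ~ N(0, I)
b = rng.uniform(0.0, 2 * np.pi, size=D)  # random phase offsets

x, y = rng.normal(size=d), rng.normal(size=d)
approx = rff(x, W, b) @ rff(y, W, b)               # random-feature estimate
exact = np.exp(-np.sum((x - y) ** 2) / 2)          # closed-form RBF kernel
# Monte Carlo error shrinks like 1/sqrt(D) as the feature count D grows.
```

The paper's result is the path-space analogue: its random reservoirs play the role of the cosine features, and the limiting objects are the RBF-lifted and rough signature kernels rather than the plain RBF kernel.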
👉 More information
🗞 Random Controlled Differential Equations
🧠 ArXiv: https://arxiv.org/abs/2512.23670
