Optimised Step Function Surpasses AlphaEvolve’s Result with 575 Intervals.

Researchers present a nonnegative step function comprising 575 equally spaced intervals that provides an improved example for an autoconvolution inequality, bettering a prior construction by DeepMind’s AlphaEvolve that used 50 intervals. The new function was obtained with gradient-based optimisation, in contrast to the evolutionary search employed in the earlier work.

The pursuit of extremal functions for specific mathematical inequalities continues to refine theoretical bounds and to test computational methods. Recent work focuses on autoconvolution inequalities, which relate a function to the convolution of that function with itself. Working independently of the AlphaEvolve project, Boyer and Li present a nonnegative step function comprising 575 equally spaced intervals that demonstrably improves upon a prior example produced by DeepMind’s AlphaEvolve, which used a step function with only 50 intervals. Their approach, detailed in “An improved example for an autoconvolution inequality”, relies on gradient-based optimisation rather than the evolutionary algorithms employed by AlphaEvolve, and represents a notable advance in this area of mathematical analysis.
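
For context, inequalities of this type are usually stated in the following form. This is a sketch of the standard formulation, assuming the target is the same autocorrelation-type inequality AlphaEvolve addressed; the exact support and normalisation used in the new paper may differ.

```latex
% Assumed standard form of the autoconvolution inequality (not quoted from
% the paper): for every nonnegative f supported on a fixed interval,
%     max_t (f*f)(t) >= C * (\int f)^2 .
% An explicit step function with a small ratio max(f*f) / (\int f)^2
% certifies an improved upper bound on the best possible constant C.
\[
  \sup_{t}\,(f * f)(t) \;\ge\; C \left( \int f(x)\,\mathrm{d}x \right)^{2},
  \qquad (f * f)(t) = \int f(x)\, f(t - x)\,\mathrm{d}x .
\]
```

In this setting, “improving the example” means exhibiting a function whose peak autoconvolution is smaller relative to the square of its integral, thereby tightening the bound on the constant.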

The construction is a nonnegative step function: a piecewise constant function that takes only non-negative values, defined by a finite number of intervals and the constant height assigned to each. With 575 equally spaced intervals, the function demonstrably satisfies the target inequality with a better constant than the previously established benchmark from DeepMind’s AlphaEvolve, which found a comparable function using only 50 intervals. The much finer subdivision gives the optimisation far more degrees of freedom with which to shape the function.
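
To make the object concrete, here is a minimal sketch of how such a step function can be represented and evaluated. The interval count, support and heights below are illustrative placeholders, not the published construction.

```python
import numpy as np

def make_step_function(heights, support=(0.0, 1.0)):
    """Return f(x) for a nonnegative step function on equally spaced intervals.

    heights[i] is the constant value of f on the i-th of n equal
    subintervals of `support`; outside the support, f is zero.
    """
    heights = np.asarray(heights, dtype=float)
    assert np.all(heights >= 0), "step function must be nonnegative"
    a, b = support
    n = len(heights)
    width = (b - a) / n

    def f(x):
        x = np.asarray(x, dtype=float)
        idx = np.floor((x - a) / width).astype(int)
        inside = (x >= a) & (x < b)
        return np.where(inside, heights[np.clip(idx, 0, n - 1)], 0.0)

    return f

# Toy example with 5 intervals (the published example uses 575):
f = make_step_function([0.2, 1.0, 0.4, 0.9, 0.1])
print(f(np.array([0.05, 0.35, 0.95, 1.2])))  # prints [0.2 1.  0.1 0. ]
```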

The methodology diverges from AlphaEvolve’s reliance on evolutionary algorithms, instead employing gradient-based methods. Gradient descent is a first-order iterative optimisation algorithm for finding a minimum of a function: it repeatedly adjusts the parameters in the direction of steepest descent, lowering the objective value at each step. In this instance, a gradient-based procedure adjusts the heights of the step function so that it satisfies the target inequality with an improved constant. The result illustrates that quite different computational strategies can succeed on the same mathematical optimisation problem.
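
As a rough illustration of how a gradient-based search over step-function heights might look, the sketch below performs projected subgradient descent on the peak of the autoconvolution while holding the integral of the function at one. This is not the authors’ method: the objective, the unit support, the clip-and-renormalise projection and all hyperparameters are assumptions made for the example.

```python
import numpy as np

def autoconv_peak(h, width):
    """Peak of f*f for a step function with heights h on intervals of `width`.

    For piecewise-constant f, f*f is piecewise linear with breakpoints at
    multiples of `width`, so its maximum sits at a breakpoint and equals
    width times the maximum of the discrete self-convolution of h.
    """
    return width * np.max(np.convolve(h, h))

def improve_example(n=50, steps=20000, lr=1e-3, seed=0):
    """Subgradient descent on max(f*f), keeping f nonnegative with unit integral."""
    rng = np.random.default_rng(seed)
    width = 1.0 / n                      # assumed support [0, 1], n equal intervals
    h = rng.uniform(0.5, 1.5, size=n)
    h /= h.sum() * width                 # normalise so the integral of f is 1

    for _ in range(steps):
        c = np.convolve(h, h)            # breakpoint values of (f*f) / width
        m = int(np.argmax(c))            # index where the peak is attained
        grad = np.zeros(n)               # d/dh_j of c[m] = sum_i h_i h_{m-i} is 2 h_{m-j}
        lo, hi = max(0, m - n + 1), min(m, n - 1)
        grad[lo:hi + 1] = 2.0 * h[m - np.arange(lo, hi + 1)]
        h -= lr * width * grad           # subgradient step on the peak value
        h = np.clip(h, 0.0, None)        # keep the step function nonnegative
        h /= h.sum() * width             # re-impose unit integral (heuristic projection)

    ratio = autoconv_peak(h, width)      # equals max(f*f) / (integral of f)^2 here
    return h, ratio

heights, ratio = improve_example()
print(f"peak of autoconvolution after optimisation: {ratio:.4f}")
```

In practice one would likely smooth the maximum (for instance with a log-sum-exp surrogate) or use an automatic-differentiation framework, but the sketch conveys the basic idea: descend on the peak of the autoconvolution while keeping the function nonnegative and its integral fixed.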

Researchers are now probing the limits of what can be achieved with nonnegative step functions and investigating how well the gradient-based method scales to more complex inequalities. Future work is expected to extend these techniques to harder problems and to examine the theoretical properties of the resulting solutions, contributing to a deeper understanding of the mathematical principles governing such functions.

The investigation centres on the properties of these step functions and their connection to number theory, in particular to questions about sums of integers and sets with restricted additive structure, where autoconvolution bounds play a role. While the abstract does not spell out the broader theoretical implications, the jump from 50 to 575 intervals points to a sharper understanding of the constraints such functions must obey, and to a more precise relationship between the function’s parameters and the inequality it must satisfy.

The findings contribute to the ongoing exploration of extremal problems for functions and their applications, potentially impacting areas such as additive number theory, which studies the properties of sums of integers. The successful application of gradient-based methods underscores their utility in uncovering intricate mathematical constructions, offering a viable alternative to search strategies, such as evolutionary algorithms, that rely on random exploration and selection.

👉 More information
🗞 An improved example for an autoconvolution inequality
🧠 DOI: https://doi.org/10.48550/arXiv.2506.16750
