Huge AI Funding Obscures Scientific Progress, Warns DeepMind’s Demis Hassabis in FT Interview

In a recent interview with the Financial Times (FT), Demis Hassabis, co-founder of DeepMind, warns that the surge of funding into artificial intelligence (AI) is creating hype that obscures scientific progress. He criticizes the billions invested in generative AI start-ups, suggesting the gold rush encourages exaggeration and potential deception.

Despite this, Hassabis remains convinced of AI’s transformative potential, citing DeepMind’s AlphaFold model as proof of how AI can accelerate scientific research. He also discusses the pursuit of artificial general intelligence (AGI), suggesting it could be achieved within the next decade. DeepMind is exploring ways to improve the reliability of large language models, including a new methodology called SAFE.

The Impact of Excessive AI Funding on Scientific Progress

The influx of capital into the field of artificial intelligence (AI) has generated a level of hype that obscures the significant scientific advancements being made, according to Sir Demis Hassabis, co-founder of DeepMind. As the CEO of Google's AI research division, Hassabis expressed concern that the billions of dollars being funneled into generative AI start-ups and products are accompanied by excessive hype and potential deception, much as in other overhyped areas such as cryptocurrency. This, he argues, is unfortunate because it drowns out the phenomenal scientific research and progress under way in AI.

Hassabis points out that while AI is not hyped enough in some respects, in other ways it is excessively hyped, leading to discussions about unrealistic expectations and claims. The launch of OpenAI’s ChatGPT chatbot in November 2022, for instance, sparked an investor frenzy as start-ups rushed to develop and deploy generative AI and attract venture capital funding. This rush has led to an investment of $42.5bn in 2,500 AI start-up equity rounds last year alone, according to market analysts CB Insights.

The Role of AI in Scientific Discovery and Research

Despite the misleading hype surrounding AI, Hassabis remains convinced that the technology is one of the most transformative inventions in human history. He believes that we are only beginning to scratch the surface of what will be possible over the next decade and beyond. According to Hassabis, we are potentially at the beginning of a new golden era of scientific discovery, a new Renaissance.

DeepMind’s AlphaFold model, released in 2021, serves as the best proof of concept for how AI could accelerate scientific research. AlphaFold has helped predict the structures of 200 million proteins and is now being used by more than 1 million biologists worldwide. DeepMind is also using AI to explore other areas of biology and accelerate research into drug discovery and delivery, material science, mathematics, weather prediction, and nuclear fusion technology.

The Pursuit of Artificial General Intelligence

DeepMind, founded in London in 2010, was established with the mission of achieving “artificial general intelligence” (AGI): a system that matches all human cognitive capabilities. Hassabis believes that one or two more critical breakthroughs are needed before AGI is reached, and he suggests there is a 50% chance this could happen within the next decade. Given the potential power of AGI, Hassabis advocates a more scientific approach to building it, as opposed to the hacker approach favored by Silicon Valley.

The Importance of AI Safety and Fact-Checking

Hassabis has advised the British government on the first global AI Safety Summit and welcomes the ongoing international dialogue on the subject. He believes that the newly created UK and US AI safety institutes are important first steps, but that more needs to be done as the technology improves exponentially.

DeepMind is exploring different ways of fact-checking and grounding its models by cross-checking responses against Google Search or Google Scholar. This approach is similar to how its AlphaGo model mastered the ancient game of Go by double-checking its output. A large language model could also verify whether a response made sense and make adjustments. DeepMind researchers recently released a paper outlining a new methodology, called SAFE, for reducing the factual errors, known as hallucinations, generated by large language models.
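The cross-checking idea described above can be illustrated with a minimal sketch. Note this is a hypothetical illustration of the general pattern, not DeepMind's actual SAFE implementation: the real pipeline uses a language model to split a response into atomic facts and rates each one against live Google Search results, whereas here both steps are stubbed with simple string operations so the control flow is runnable.

```python
# Hypothetical sketch of a SAFE-style fact-checking loop.
# split_into_claims and is_supported are stand-ins (assumptions),
# not real SAFE components.

def split_into_claims(response: str) -> list[str]:
    """Stand-in for the LLM step that extracts atomic facts."""
    return [s.strip() for s in response.split(".") if s.strip()]

def is_supported(claim: str, evidence: list[str]) -> bool:
    """Stand-in for querying a search backend and rating support."""
    return any(claim.lower() in doc.lower() for doc in evidence)

def check_response(response: str, evidence: list[str]) -> dict:
    """Flag claims in the response that the evidence does not support."""
    claims = split_into_claims(response)
    supported = [c for c in claims if is_supported(c, evidence)]
    return {
        "claims": len(claims),
        "supported": len(supported),
        "flagged": [c for c in claims if c not in supported],
    }

evidence = ["AlphaFold has predicted the structures of 200 million proteins."]
report = check_response(
    "AlphaFold has predicted the structures of 200 million proteins. "
    "AlphaFold was released in 1995.",
    evidence,
)
print(report["claims"], report["supported"])  # 2 1 (second claim is flagged)
```

In a real system, the flagged claims would be fed back to the model for revision, mirroring the self-correction loop the article describes.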

The Future of AI and Its Potential Impact

The future of AI holds immense potential, but it also comes with challenges and risks. The excessive hype and funding can obscure the real scientific progress being made and lead to unrealistic expectations. However, with a more scientific approach to building AGI and a focus on AI safety and fact-checking, AI can continue to be a transformative tool in scientific research and discovery. As Hassabis suggests, we may be at the beginning of a new golden era of scientific discovery, driven by advancements in AI.

Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in the field of technology, whether AI or the march of robots. But Quantum occupies a special space. Quite literally a special space. A Hilbert space, in fact, haha! Here I try to provide some of the news that might be considered breaking news in the Quantum Computing space.
