Huge AI Funding Obscures Scientific Progress, Warns DeepMind’s Demis Hassabis in FT Interview

In a recent interview with the Financial Times (FT), Demis Hassabis, co-founder of DeepMind, warned that the surge of funding into artificial intelligence (AI) is creating hype that obscures genuine scientific progress. He criticized the billions being invested in generative AI start-ups, suggesting the money brings exaggeration and potential deception with it.

Despite this, Hassabis remains convinced of AI’s transformative potential, citing DeepMind’s AlphaFold model as proof of how AI can accelerate scientific research. He also discusses the pursuit of artificial general intelligence (AGI), suggesting it could be achieved within the next decade. DeepMind is exploring ways to improve the reliability of large language models, including a new methodology called SAFE.

The Impact of Excessive AI Funding on Scientific Progress

The influx of capital into the field of artificial intelligence (AI) has created a level of hype that obscures the significant scientific advances being made, according to Sir Demis Hassabis, co-founder of DeepMind. As the CEO of Google’s AI research division, Hassabis expressed concern that the billions of dollars being funneled into generative AI start-ups and products are accompanied by exaggeration and potential deception, much as in other overhyped areas such as cryptocurrency. This hype, he argues, is unfortunate because it overshadows the phenomenal scientific research and progress being made in AI.

Hassabis points out that while AI is not hyped enough in some respects, in other ways it is excessively hyped, leading to discussions about unrealistic expectations and claims. The launch of OpenAI’s ChatGPT chatbot in November 2022, for instance, sparked an investor frenzy as start-ups rushed to develop and deploy generative AI and attract venture capital funding. This rush has led to an investment of $42.5bn in 2,500 AI start-up equity rounds last year alone, according to market analysts CB Insights.

The Role of AI in Scientific Discovery and Research

Despite the misleading hype surrounding AI, Hassabis remains convinced that the technology is one of the most transformative inventions in human history. He believes that we are only beginning to scratch the surface of what will be possible over the next decade and beyond. According to Hassabis, we are potentially at the beginning of a new golden era of scientific discovery, a new Renaissance.

DeepMind’s AlphaFold model, released in 2021, serves as the best proof of concept for how AI could accelerate scientific research. AlphaFold has helped predict the structures of 200 million proteins and is now being used by more than 1 million biologists worldwide. DeepMind is also using AI to explore other areas of biology and accelerate research into drug discovery and delivery, material science, mathematics, weather prediction, and nuclear fusion technology.

The Pursuit of Artificial General Intelligence

DeepMind, founded in London in 2010, was established with the mission of achieving “artificial general intelligence” (AGI): a system matching all human cognitive capabilities. Hassabis believes that one or two more critical breakthroughs are needed before AGI is reached, and he puts a 50% chance on this happening within the next decade. Given the potential power of AGI, Hassabis advocates a more scientific approach to building it, as opposed to the hacker approach favored by Silicon Valley.

The Importance of AI Safety and Fact-Checking

Hassabis has advised the British government on the first global AI Safety Summit and welcomes the ongoing international dialogue on the subject. He believes that the creation of the UK and US AI safety institutes is an important first step, but that more needs to be done as the technology improves at an exponential pace.

DeepMind is exploring different ways of fact-checking and grounding its models by cross-checking responses against Google Search or Google Scholar. This approach is similar to how its AlphaGo model mastered the ancient game of Go by double-checking its output. A large language model could also verify whether a response made sense and make adjustments. DeepMind researchers recently released a paper outlining a new methodology, called SAFE, for reducing the factual errors, known as hallucinations, generated by large language models.
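The pipeline described above — decompose a response into individual claims, retrieve evidence for each, and rate how many claims the evidence supports — can be illustrated with a toy sketch. Note this is an assumption-laden illustration, not DeepMind’s actual SAFE implementation (which uses an LLM as the claim splitter and rater, and live Google Search as the retriever); the `search` stub and keyword-overlap rater here are stand-ins for those components.

```python
# Toy sketch of a SAFE-style factuality check: split a response into
# atomic claims, retrieve evidence for each, and report the fraction
# of claims the evidence supports. The retriever and rater below are
# illustrative placeholders, not the real methodology's components.

def split_into_claims(response: str) -> list[str]:
    """Naively treat each sentence as one atomic claim."""
    return [s.strip() for s in response.split(".") if s.strip()]

def search(claim: str, corpus: list[str]) -> list[str]:
    """Stub retriever: return corpus entries sharing any word with the claim."""
    words = set(claim.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def is_supported(claim: str, evidence: list[str]) -> bool:
    """Toy rater: a claim counts as supported if a majority of its
    content words (length > 3) appear in one retrieved document."""
    words = {w for w in claim.lower().split() if len(w) > 3}
    if not words:
        return False
    threshold = len(words) // 2 + 1
    return any(len(words & set(doc.lower().split())) >= threshold
               for doc in evidence)

def factuality_score(response: str, corpus: list[str]):
    """Fraction of claims supported by retrieved evidence, plus the list."""
    claims = split_into_claims(response)
    supported = [c for c in claims if is_supported(c, search(c, corpus))]
    return len(supported) / len(claims), supported

# Miniature "search index" standing in for a live search backend.
corpus = [
    "AlphaFold predicted structures for 200 million proteins",
    "DeepMind was founded in London in 2010",
]
score, ok = factuality_score(
    "AlphaFold predicted 200 million proteins. "
    "DeepMind was founded on Mars in 1850",
    corpus,
)
```

Here the first claim is supported by the corpus while the fabricated second claim is not, so the response scores 0.5; in the real setting, the rater would be a language model judging each claim against live search results rather than a word-overlap heuristic.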

The Future of AI and Its Potential Impact

The future of AI holds immense potential, but it also comes with challenges and risks. The excessive hype and funding can obscure the real scientific progress being made and lead to unrealistic expectations. However, with a more scientific approach to building AGI and a focus on AI safety and fact-checking, AI can continue to be a transformative tool in scientific research and discovery. As Hassabis suggests, we may be at the beginning of a new golden era of scientific discovery, driven by advancements in AI.

Dr. Donovan