Multiverse Computing Secures $215M to Scale AI Model Compression Technology

Multiverse Computing, a leader in quantum-inspired AI, has secured $215 million in Series B funding to scale its CompactifAI technology. Developed in 2024 and now entering wider deployment, CompactifAI significantly compresses Large Language Models (LLMs) – by up to 95% – without compromising performance. Headquartered in San Sebastian, Spain, the company’s innovation addresses the escalating costs associated with LLM infrastructure, promising to unlock broader access and applications for AI across multiple sectors. This investment round, led by Bullhound Capital with participation from prominent investors including HP Tech Ventures, aims to revolutionise the $106 billion AI inference market.

2024 Development and Initial Rollout

In 2024, Multiverse Computing developed and initially rolled out CompactifAI, its technology for compressing Large Language Models (LLMs). This culminated in successful deployments and validation of the technology. CompactifAI achieves significant compression rates while maintaining model accuracy, addressing a critical need for efficient LLM deployment. Currently, the technology supports Llama, DeepSeek, and Mistral models, with ongoing expansion to additional architectures.

Quantum-Inspired Model Compression

Multiverse Computing’s CompactifAI technology addresses the computational demands of large language models (LLMs) through a novel compression approach. Unlike traditional methods such as quantization and pruning, which often compromise model accuracy, CompactifAI achieves up to 95% compression while maintaining performance levels. This is achieved by leveraging Tensor Networks, a quantum-inspired technique for simplifying complex neural networks, significantly reducing computational requirements and inference costs, and enabling deployment across a wider range of hardware, from cloud infrastructure to edge devices like PCs and mobile phones.
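CompactifAI's exact method is proprietary, but the underlying idea of tensor-network compression can be illustrated with its simplest relative: factorising a layer's weight matrix into a product of smaller matrices and truncating the small singular values. The sketch below is a hypothetical illustration only, not Multiverse's implementation; the `rank` parameter and the truncated-SVD approach are assumptions chosen to show how parameter counts shrink.

```python
import numpy as np

def compress_layer(W, rank):
    """Approximate weight matrix W by a rank-`rank` factorisation W ≈ A @ B.

    This truncated SVD is the simplest cousin of the tensor-network
    factorisations described in the article; real tensor-network methods
    decompose weights into chains of small tensors instead of one pair.
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * S[:rank]   # (m, rank), columns scaled by singular values
    B = Vt[:rank, :]             # (rank, n)
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))
A, B = compress_layer(W, rank=64)

original = W.size
compressed = A.size + B.size
print(f"parameters: {original} -> {compressed} "
      f"({100 * (1 - compressed / original):.0f}% fewer)")
```

At inference time the compressed layer computes `(x @ A) @ B`, two thin matrix multiplies instead of one large one, which is where the reduction in compute and memory comes from. How aggressively a given model can be truncated without hurting accuracy depends on how much redundancy its weights actually contain.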

Performance and Efficiency Gains

CompactifAI demonstrably enhances performance and efficiency in large language model (LLM) deployment. The technology achieves LLM compression of up to 95% while maintaining original accuracy, representing a significant reduction in computational demand. This translates to inference cost reductions of 50% to 80% and accelerates processing speeds by a factor of 4 to 12, offering substantial economic and operational benefits. These compressed models broaden the scope of LLM applicability, functioning on conventional cloud infrastructure, private data centres, and even edge devices such as PCs, mobile phones, and embedded systems like Raspberry Pi.

Strategic Investor Support

Multiverse Computing secured a €189 million ($215 million) Series B investment round, attracting a diverse group of international and strategic investors. This funding will accelerate the adoption of CompactifAI and address the substantial costs currently inhibiting widespread deployment of Large Language Models (LLMs), impacting the $106 billion AI inference market.

The investment syndicate is led by Bullhound Capital, with participation from HP Tech Ventures, SETT, Forgepoint Capital International, CDP Venture Capital, Santander Climate VC, Quantonation, Toshiba, and Capital Riesgo de Euskadi – Grupo SPRI. Bullhound Capital’s decision to lead the round reflects confidence in Multiverse’s technology and leadership, with Managing Partner Per Roman highlighting the company’s potential to address global needs for AI efficiency and contribute to European technological sovereignty.

HP Inc. invested strategically, viewing Multiverse’s approach as a means to enhance the accessibility of AI applications, particularly at the edge, and deliver benefits such as improved performance, personalization, and cost efficiency. Forgepoint Capital International similarly recognised the potential for Multiverse to become a foundational element of the AI infrastructure stack, enabling smarter, cheaper, and more sustainable AI solutions. The support from entities like SETT underscores the national strategic importance of Multiverse’s work and its alignment with broader digital transformation goals.


Quantum News

As the Official Quantum Dog (or hound), my role is to dig out the latest nuggets of quantum goodness. There is so much happening right now in technology, whether AI or the march of robots, but Quantum occupies a special space. Quite literally a special space: a Hilbert space, in fact! Here I try to provide news that might be considered breaking in the Quantum Computing space.
