Based in Donostia, Spain, Multiverse Computing has introduced two new compressed versions of Llama 3.1-8B and Llama 3.3-70B models, achieving an 80% reduction in size with minimal precision loss. These compressed models, developed using their proprietary CompactifAI technology, feature 60% fewer parameters than the originals, resulting in 84% greater energy efficiency, 40% faster inference times, and a 50% cost reduction.
The technology employs quantum-inspired tensor networks to compress AI models by up to 93% while maintaining high accuracy, significantly improving the economics of AI processing. The compressed models are available via API on the CompactifAI platform and are being used in beta testing by leading banks, telecommunications firms, and energy companies.
This compression is made possible by the company's CompactifAI tool, which uses quantum-inspired tensor networks to restructure and shrink AI models.
CompactifAI stands out by compressing models by up to 93% while incurring only a 2-3% accuracy drop, whereas standard compression techniques typically lose 20-30% accuracy at 50-60% compression. The method improves energy efficiency, accelerates inference, and reduces operational costs.
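Multiverse Computing has not published the internals of its tensor-network method, so the sketch below is only a rough illustration of the general idea behind such compression: replacing a large dense weight matrix with smaller factors. It uses a plain truncated SVD (a much simpler cousin of tensor-network decompositions); the matrix size and rank are arbitrary choices for the example, not values from CompactifAI.

```python
import numpy as np

# Illustrative only: a low-rank factorization of one dense layer's weights.
# CompactifAI's actual quantum-inspired tensor-network method is proprietary
# and more sophisticated than this truncated SVD.

rng = np.random.default_rng(0)
W = rng.standard_normal((1024, 1024))  # stand-in for a dense weight matrix

# Factor W ≈ A @ B with a small inner rank.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
rank = 64                      # keep only the largest singular values
A = U[:, :rank] * s[:rank]     # shape (1024, 64)
B = Vt[:rank, :]               # shape (64, 1024)

original_params = W.size           # 1024 * 1024
compressed_params = A.size + B.size  # 2 * 1024 * 64
print(f"parameters removed: {1 - compressed_params / original_params:.0%}")
```

Storing the two thin factors instead of the full matrix cuts the parameter count by the printed fraction; the trade-off is some approximation error, which methods like CompactifAI aim to keep small.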
The implications of this technology are vast. It enables AI applications in edge computing environments such as smartphones, laptops, and vehicles, as well as industrial settings like oil rigs and satellites. By overcoming the trade-off between model size and performance, Multiverse Computing is paving the way for more efficient and accessible AI solutions across various industries.
Compressed Models Offer Significant Efficiency Gains
Multiverse Computing’s compressed models substantially improve energy efficiency and computational speed: the 84% gain in energy efficiency lowers power consumption, the 40% faster inference enables quicker data processing, and operational costs are cut in half, making AI solutions more accessible to businesses.
These gains facilitate deployment across diverse environments, including edge computing devices like smartphones and vehicles, and industrial applications such as oil rigs and satellites. The enhanced efficiency means that even in resource-constrained settings, AI models can support critical operations with minimal overhead.
Multiverse Aims to Accelerate AI Impact
Multiverse Computing’s advancements in AI model compression aim to accelerate the deployment of AI solutions across various industries. Compressing models up to 93% with minimal accuracy loss, CompactifAI enables efficient processing on edge devices such as smartphones and vehicles, as well as in industrial settings like oil rigs and satellites. This capability allows for real-time decision-making and resource optimization in environments where computational resources are limited.
The technology’s scalability is a key factor in accelerating AI impact. By maintaining high performance even after significant compression, CompactifAI can handle larger datasets and more complex models without compromising speed or accuracy. This makes it possible to implement advanced AI solutions in sectors such as healthcare and autonomous systems, driving innovation and efficiency across these fields.
Additionally, the reduced energy consumption resulting from CompactifAI’s optimizations contributes to a more sustainable approach to AI deployment. Lower power requirements mean AI solutions can be implemented with a smaller environmental footprint, aligning with global efforts to promote eco-friendly technologies. This dual focus on performance and sustainability positions Multiverse Computing at the forefront of accelerating meaningful change through AI.