Spanish AI startup Multiverse Computing released HyperNova 60B on Hugging Face this week, claiming performance gains over Mistral's comparable models. The company, which specializes in quantum-inspired optimization techniques, says the new model demonstrates improved reasoning and coding capabilities across standard benchmarks.
The release marks Multiverse Computing's entry into the competitive large language model arena, where it faces established players including Mistral AI, Meta, and a growing field of open-source alternatives. The company has not disclosed full benchmark results or independent verification of its performance claims.
Technical Specifications
HyperNova 60B is a 60-billion-parameter model available under an open license on Hugging Face, the popular model repository. Multiverse Computing states the model was trained using proprietary optimization methods derived from the company's quantum computing research, though the exact training methodology remains undisclosed.
The model targets enterprise applications requiring complex reasoning, including financial analysis, scientific computing, and code generation. At 60 billion parameters, it sits in the mid-range of current large language models: smaller than Meta's Llama 3.1 405B but larger than many specialized models.
Market Context
Mistral AI, the French startup valued at $6 billion as of December 2024, has established itself as Europe's leading AI model developer. Its flagship models power applications across finance, healthcare, and enterprise software. Any credible challenge to Mistral's performance would represent a significant development in the European AI sector.
Multiverse Computing has not published side-by-side comparisons or specified which Mistral model serves as the benchmark. The company's previous work focused on quantum algorithms for portfolio optimization and drug discovery rather than large language models, making this release a strategic pivot.
What Comes Next
Independent researchers and enterprise users will test HyperNova 60B against established models in the coming weeks. The model's actual performance on domain-specific tasks will determine whether it gains traction beyond initial release attention.
Multiverse Computing has indicated plans for additional model releases but has not provided a roadmap or timeline. The company raised $27 million in Series A funding in 2023, according to Crunchbase.