
Shrinking AI for Smartphones: Spanish Company Secures $215M Investment for Miniaturized Computing in Multiverse AI

Quantum-inspired compression technology from Multiverse Computing is claimed to fit large AI models, such as Llama-2, onto standard smartphones.


Squeezing AI Down to Size: Multiverse Computing's Quantum-Inspired Solution


The Big Squeeze

Spanish AI startup Multiverse Computing has just bagged a whopping $215 million in Series B funding, backed by industry giants like Bullhound Capital, HP Tech Ventures, and Toshiba. The reason? The company says it has cracked the code on AI model compression, reporting a 95% reduction in model size without compromising performance.

Shrinking AI Down to Smartphone Size

Multiverse Computing's secret weapon is CompactifAI: a compression method that borrows from the quantum world to shrink AI models down to size, making them as portable as your smartphone.

Setting the Record Straight

The company's compressed Llama-2 7B model reportedly delivers a 25% speed boost at inference while using 70% fewer parameters, with a paltry 2-3% drop in accuracy. In simple terms, it's like giving your AI a power-up without compromising its smarts.
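A quick back-of-envelope calculation shows why the 70% figure matters for phones; the half-precision (fp16) weight size is an assumption for illustration, not a detail from the announcement.

```python
# Rough memory math for the figures quoted above (fp16 weights assumed).
params_original = 7e9                              # Llama-2 7B
params_compressed = params_original * (1 - 0.70)   # 70% fewer parameters
bytes_per_weight = 2                               # fp16 = 2 bytes (assumption)

gb = lambda p: round(p * bytes_per_weight / 1e9, 1)
print(gb(params_original), gb(params_compressed))  # 14.0 4.2
```

At roughly 4 GB of weights, the compressed model is within reach of a high-end phone's memory, which a 14 GB original is not.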

From Traditional Compression to Quantum-Inspired

Traditional compression methods like quantization, pruning, and distillation are, the company argues, no match for CompactifAI. It uses tensor networks – mathematical structures inspired by quantum physics – to compress models without losing the important stuff. It's like folding origami with your AI's weight matrices: only the significant connections are retained, while unnecessary patterns are discarded.
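For contrast, the conventional baselines named above can be sketched in a few lines of numpy; the matrix size, 8-bit scale, and 70% sparsity here are arbitrary choices for illustration, not figures from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256)).astype(np.float32)  # a stand-in weight matrix

# Quantization: store each weight in fewer bits (crude symmetric 8-bit here).
scale = np.abs(W).max() / 127
W_quant = np.round(W / scale).astype(np.int8)       # 4x smaller than float32

# Pruning: zero out the smallest-magnitude weights, leaving a sparse matrix.
threshold = np.quantile(np.abs(W), 0.70)            # drop the bottom 70%
W_pruned = np.where(np.abs(W) >= threshold, W, 0.0)

print(round((W_pruned != 0).mean(), 2))             # ~0.3 of weights survive
```

Both techniques discard information weight-by-weight; the tensor-network approach instead refactors the whole matrix, which is why the article treats it as a different family of methods.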

Why Size Matters

If Multiverse Computing's research holds true at scale, it could address a significant issue plaguing AI: models so enormous they demand specialized data centers just to operate. The company claims to be the first able to profile the inner workings of a neural network and eliminate billions of spurious correlations, optimizing all sorts of AI models in the process.

Understanding Tensor Networks

Tensor networks help physicists keep track of particle interactions without drowning in data. CompactifAI uses the same machinery to tackle AI's massive model sizes, folding weight matrices into smaller, interconnected structures called Matrix Product Operators.
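The folding idea can be sketched as a two-core tensor-train (Matrix Product Operator) decomposition built from a single truncated SVD. This is a minimal illustration of the general technique, not CompactifAI's actual algorithm; the dimensions and bond rank are assumptions.

```python
import numpy as np

def matrix_to_mpo(W, m_dims, n_dims, max_rank):
    """Fold a weight matrix into two MPO cores by pairing row/column
    indices and truncating one SVD (a single tensor-train step)."""
    m1, m2 = m_dims                      # row split: m1 * m2 == W.shape[0]
    n1, n2 = n_dims                      # column split: n1 * n2 == W.shape[1]
    T = W.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    r = min(max_rank, len(s))            # the "bond dimension" sets compression
    A = (U[:, :r] * s[:r]).reshape(m1, n1, r)   # first core
    B = Vt[:r].reshape(r, m2, n2)               # second core
    return A, B

def mpo_to_matrix(A, B):
    """Contract the cores back into a dense matrix (to check the loss)."""
    m1, n1, _ = A.shape
    _, m2, n2 = B.shape
    T = np.einsum('air,rbj->aibj', A, B)
    return T.transpose(0, 2, 1, 3).reshape(m1 * m2, n1 * n2)

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 64))
A, B = matrix_to_mpo(W, (8, 8), (8, 8), max_rank=16)
saved = 1 - (A.size + B.size) / W.size   # fraction of parameters removed
print(saved)                             # 0.5
```

Truncating the bond dimension is where the compression (and the small accuracy loss) comes from: correlations lying in the discarded singular directions simply cannot be represented by the smaller cores.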

From Compression to Healing

After compression, models undergo a "healing" process: a quick retraining pass that takes less than an epoch thanks to the reduced parameter count. The company claims this restoration runs 50% faster than training the original models, owing to decreased GPU-CPU transfer loads.
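The healing step can be imitated with a short distillation loop: briefly nudging a compressed layer's factors to reproduce the original layer's outputs. This is a hedged toy using a plain low-rank factorization (not MPO cores), and the rank, noise level, step size, and iteration count are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": the original dense layer. "Student": a compressed low-rank
# copy whose factors were slightly damaged by the compression step.
W = rng.normal(size=(128, 128))
U_, s, Vt = np.linalg.svd(W, full_matrices=False)
r = 16                                    # retained rank (assumption)
U = U_[:, :r] * np.sqrt(s[:r]) + 0.05 * rng.normal(size=(128, r))
V = np.sqrt(s[:r])[:, None] * Vt[:r] + 0.05 * rng.normal(size=(r, 128))

X = rng.normal(size=(256, 128))           # sample activations for healing
loss = lambda: np.linalg.norm(X @ (U @ V).T - X @ W.T)

before = loss()
lr = 1e-3                                 # hypothetical step size
for _ in range(300):                      # far less than a full training run
    err = X @ (U @ V).T - X @ W.T         # student vs. teacher outputs
    G = err.T @ X / len(X)                # gradient w.r.t. the product U @ V
    U, V = U - lr * G @ V.T, V - lr * U.T @ G

print(loss() < before)                    # True: the mismatch shrinks
```

Because only the small factors are updated, each step touches far fewer parameters than retraining the original layer would, which is the intuition behind the claimed speedup.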

The Promise Beyond Cost Savings

Beyond cutting costs, Multiverse's technology holds promise for edge AI deployment, enabling sophisticated models to run locally instead of on cloud servers. HP Tech Ventures' involvement signals interest in this realm, with Tuan Tran, HP's President of Technology and Innovation, stating, "Multiverse's innovative approach has the potential to bring AI benefits of enhanced performance, personalization, privacy, and cost efficiency to life for companies of any size."

Over Time

With the backing of key industry players, Multiverse Computing is poised to revolutionize the way we deal with AI models. The company already serves over 100 clients, including Bosch and the Bank of Canada, applying its quantum-inspired algorithms beyond AI to energy optimization and financial modeling. The Spanish government has invested €67 million, pushing total funding above $250 million. The company plans to expand its offerings to include DeepSeek R1 and other reasoning models. While proprietary systems such as OpenAI's models or Anthropic's Claude remain off-limits to this kind of tinkering, the future of AI model compression looks bright with Multiverse Computing in the driver's seat.

  1. The investment in Multiverse Computing, now totaling over $250 million, may pave the way for the future of AI model compression, as the company aims to expand its offerings to include DeepSeek R1 and other reasoning models.
  2. For clients like Bosch and the Bank of Canada, Multiverse Computing's quantum-inspired algorithms aren't just about cutting costs; they promise enhanced performance, personalization, privacy, and cost efficiency by deploying sophisticated models locally rather than relying on cloud servers.
  3. With tensor networks, Multiverse Computing's CompactifAI method borrows from the quantum world to shrink AI models, retaining only significant connections while discarding unnecessary patterns – much like folding origami with AI weight matrices.
