
Led by Ex-Google and Meta Executives, Majestic Labs Emerges From Stealth with $100M To Redefine AI Infrastructure by Tearing Down the “Memory Wall”

Majestic Labs announces Series A funding and an advanced AI server architecture that replaces multiple racks with a single server, delivering dramatic improvements in performance, power and operational efficiency

Majestic Labs, the developer of next-generation servers delivering 1000x the memory capacity of a top-of-the-line GPU for AI workloads, launched today with over $100 million in financing. Founded by ex-Google and Meta leaders, the company dramatically improves AI infrastructure with a disruptive approach to system architecture. Each Majestic all-in-one server is capable of handling the largest and most advanced AI workloads that currently require multiple racks of servers and switches.

Over the past decade, AI models have grown at an explosive rate. According to Stanford’s 2025 AI Index Report, the compute used to train AI models doubles every five months, dataset sizes every eight months, and power use annually. Yet the memory infrastructure powering these systems hasn’t kept pace. Organizations are currently forced to overprovision expensive compute resources simply to access the memory capacity their workloads require. Majestic addresses this critical and costly imbalance between memory and compute in today’s GPU-centric AI infrastructure.

All the Compute, 1000x the Memory

Majestic’s fundamental breakthrough is an AI server that rebalances memory and compute. The new system architecture spans both hardware and software, letting users scale memory capacity dramatically within familiar, user-friendly frameworks and eliminating the “memory wall” that stalls data flow to processors and stifles the AI industry’s growth.

“Majestic is built on a simple and powerful insight: AI's next leap forward will come from access to more powerful AI infrastructure, and more powerful AI infrastructure requires a reimagination of the memory system,” said Co-Founder and CEO Ofer Shacham. “Majestic servers will have all the compute of state-of-the-art GPU/TPU-based systems coupled with 1000x the memory. Our breakthrough technology packs the memory capacity and bandwidth of 10 racks of today’s most advanced servers into a single server, providing our customers with unprecedented gains in performance and efficiency while slashing power consumption.”

Majestic’s technology meets the escalating memory demands of the latest, most powerful AI use cases:

  • Over 50x performance gains: Unlocked by access to orders of magnitude more memory than GPU systems, at higher bandwidth.
  • Consolidation at scale: Majestic delivers the memory capacity of ten or more racks in one server, eliminating the need to scale out for most workloads.
  • Cost and space efficiency: Fewer racks mean a smaller footprint, lower power draw, less cooling, and dramatically reduced total cost of ownership.
  • Servers for the biggest, most sophisticated workloads: Majestic is the first to enable emerging AI workloads that depend on fast access to large amounts of memory. It effortlessly supports the largest LLMs with massive context windows, extensive mixture of experts, mixture of agents, agentic AI and reasoning, as well as emerging workloads like graph neural networks and extreme-scale graph analytics (a rough sizing sketch follows this list).
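
To make the scale of those memory demands concrete, the sketch below gives a rough, illustrative estimate of the memory needed to serve a very large model with a long context window. Every model dimension in it (parameter count, layers, context length, concurrent users) is an assumption chosen for the estimate, not a figure published by Majestic Labs.

    # Illustrative back-of-envelope only: rough memory demand for serving a large
    # language model with a very long context window. Every model dimension below
    # is an assumption made for this estimate, not a figure published by Majestic Labs.

    def weights_gib(params_billions, bytes_per_param=2):
        """Model weight footprint in GiB at 16-bit precision."""
        return params_billions * 1e9 * bytes_per_param / 2**30

    def kv_cache_gib(layers, kv_heads, head_dim, context_len, users, bytes_per_elem=2):
        """Key/value cache in GiB: two tensors (K and V) per layer, per token, per KV head."""
        return 2 * layers * kv_heads * head_dim * context_len * users * bytes_per_elem / 2**30

    # Hypothetical frontier-scale model: 1T parameters, 128 layers, 16 KV heads of
    # dimension 128, serving 64 concurrent users with a 1M-token context each.
    weights = weights_gib(1000)                      # roughly 1.8 TiB of weights
    kv = kv_cache_gib(128, 16, 128, 1_000_000, 64)   # roughly 61 TiB of KV cache

    print(f"weights ~{weights / 1024:.1f} TiB, KV cache ~{kv / 1024:.1f} TiB")
    # Tens of terabytes of fast memory are needed, far beyond the roughly 1-2 TB of
    # HBM in a typical eight-GPU server, which is the imbalance described above.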

“Majestic allows for a level of scalability and operational efficiency that simply isn't possible with traditional GPU-based systems,” said Co-Founder and President Sha Rabii. “Our systems support vastly more users per server and shorten training time, lifting AI workloads to new heights both on-premises and in the cloud. Our customers benefit from tremendous improvements in performance, power consumption and total cost of ownership.”

Delivering AI Efficiency with World-Class Expertise

The founding team of Ofer Shacham, Masumi Reynders, and Sha Rabii together built FAST (Facebook Agile Silicon Team) at Meta Reality Labs and GChips at Google. The team holds over 120 patents and has developed and shipped hundreds of millions of units of custom silicon, including the world’s first AI processors on mobile devices and the first augmented reality compute platform. The team is now combining forces for the third time, recognizing a critical opportunity to transform how organizations deploy and scale AI.

Majestic is backed by several investors, including Bow Wave Capital, which led the company’s Series A, and Lux Capital, which led its earlier seed round. SBI, Upfront, Grove Ventures, Hetz Ventures, QP Ventures, Aidenlair Global and TAL Ventures also participated.

“Majestic has engineered a new system for AI from the ground up, encompassing silicon, IO, packaging, and software, that is specifically tailored for the most advanced AI workloads,” said Shahin Farshchi, PhD, Partner at Lux Capital. “The team is making the most powerful AI accessible at an unprecedented scale, providing an opportunity to truly reshape how AI is delivered globally.”

“This is an extraordinary team that has identified today’s most critical constraint in AI,” said Itai Lemberger, Founder of Bow Wave Capital. “Majestic has a clear understanding of customer pain points and is focused on addressing the critical challenges in scaling AI inferencing: performance, power and efficiency. Majestic’s is the best solution we’ve seen for addressing the memory wall problem facing all AI models. Their architecture literally replaces multiple racks with a single server.”

Scaling to Meet Global AI Demand

With this Series A investment, Majestic will expand its team of AI data science, software, silicon and systems experts while advancing its product toward general availability. The funding will also support development of the full software stack and pilot deployments with customers to validate the platform's performance and cost benefits.

“AI infrastructure is scaling at unprecedented speed, but the industry has not solved key fundamental architectural inefficiencies,” said Co-Founder and COO Masumi Reynders. “Majestic addresses this by delivering immediate operational gains on today's workloads while maintaining full programmability and flexibility to adapt as AI evolves beyond transformer-based models.”

Follow Majestic Labs on LinkedIn.

About Majestic Labs

Majestic Labs builds power-efficient AI servers for the largest and most advanced AI workloads. The company's flagship server architecture collapses multiple racks of conventional equipment into a single server. Majestic’s system features custom accelerator and memory interface chips that disaggregate memory from compute, enabling up to 128 TB of extremely fast, power-efficient, high-bandwidth memory per server, nearly 100 times more than today's leading GPU servers. This breakthrough allows organizations to run massive AI models while dramatically reducing power consumption, data center footprint, and infrastructure costs. Majestic Labs' mission is to democratize access to the most advanced AI capabilities and reduce the environmental impact of AI infrastructure.
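
As a rough, illustrative check on the “nearly 100 times” comparison above, the sketch below uses assumed HBM sizes for current eight-GPU servers; those per-GPU figures are assumptions, not vendor-confirmed specifications.

    # Illustrative only: compare the 128 TB per-server figure stated above with
    # assumed HBM capacities of current eight-GPU servers (not vendor-confirmed numbers).
    MAJESTIC_TB = 128  # per-server memory figure stated in this release

    for label, hbm_gb_per_gpu in (("141 GB per GPU", 141), ("192 GB per GPU", 192)):
        server_tb = 8 * hbm_gb_per_gpu / 1000  # total HBM across an eight-GPU server
        print(f"{label}: ~{server_tb:.1f} TB per server, ratio ~{MAJESTIC_TB / server_tb:.0f}x")
    # Prints roughly 113x and 83x, consistent in order of magnitude with "nearly 100 times".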

For more information, visit www.majestic-labs.ai/
