SambaNova has introduced its new SN50 AI chip, a processor the company claims delivers five times the speed of competing accelerators and is engineered specifically for large‑scale, low‑latency agentic AI. The launch coincides with the announcement of a multi‑year collaboration with Intel and more than $350 million in new Series E funding, marking one of the company’s most significant expansion moves to date.
The SN50 is positioned as SambaNova’s most efficient inference chip yet, which the company says offers enterprises a threefold reduction in total cost of ownership. It provides five times more compute per accelerator and four times more network bandwidth than the previous generation, and supports clusters of up to 256 accelerators connected through a multi‑terabyte‑per‑second fabric. This architecture reduces time‑to‑first‑token, enables larger batch sizes, and supports models exceeding 10 trillion parameters with context windows above 10 million tokens. SambaNova says these capabilities allow enterprises to deploy larger, more responsive AI systems while keeping latency and operating costs under control. The chip is expected to ship later this year.
“AI is no longer a contest to build the biggest model,” said Rodrigo Liang, co‑founder and CEO of SambaNova. “With the SN50 and our deep collaboration with Intel, the real race is about who can light up entire data centers with AI agents that answer instantly, never stall, and do it at a cost that turns AI from an experiment into the most profitable engine in the cloud.”
SambaNova and Intel plan to jointly develop high‑performance, cost‑efficient AI inference solutions for enterprises, governments, and AI‑native companies. Intel also intends to make a strategic investment in SambaNova as part of the collaboration. The partnership will focus on expanding SambaNova’s AI cloud built on Intel Xeon‑based infrastructure, integrating Intel CPUs, accelerators, and networking with SambaNova systems, and executing joint go‑to‑market efforts through Intel’s global channels. The companies aim to offer a strong alternative to GPU‑centric architectures and shape the next generation of heterogeneous AI data centers.
SoftBank Corp. will be the first customer to deploy the SN50 in its next‑generation AI data centers in Japan. The chip will power low‑latency inference services for sovereign and enterprise customers across the Asia‑Pacific region, supporting both open‑source and proprietary frontier models. SoftBank already hosts SambaCloud for regional developers, and the new SN50‑based clusters will serve as the backbone for its sovereign AI initiatives and future agentic services.
SambaNova’s $350 million Series E round was led by Vista Equity Partners and Cambium Capital, with participation from Intel Capital and a broad group of global investors. Several backers highlighted the SN50’s ability to meet real‑world latency and cost requirements for agentic AI, positioning it as a more efficient alternative to traditional GPU‑based systems. The company plans to use the new funding to expand SN50 production, scale its cloud platform, and deepen enterprise software integrations.
