Groq Raises $640M in Series D

Groq, a leader in fast AI inference, has secured a $640 million Series D funding round, bringing the company’s valuation to $2.8 billion. The round was led by funds and accounts managed by BlackRock Private Equity Partners, with participation from existing and new investors, including Neuberger Berman, Type One Ventures, and strategic investors such as Cisco Investments, Global Brain’s KDDI Open Innovation Fund III, and Samsung Catalyst Fund. Demand for Groq’s vertically integrated AI inference platform remains strong, driven by developers seeking unparalleled speed.

Samir Menon, Managing Director at BlackRock Private Equity Partners, expressed confidence in Groq’s market positioning, stating, “The market for AI compute is meaningful, and Groq’s vertically integrated solution is well-positioned to meet this opportunity. We look forward to supporting Groq as they scale to meet demand and accelerate their innovation further.”

Marco Chisari, Head of Samsung Semiconductor Innovation Center and EVP of Samsung Electronics, echoed this sentiment:

“Samsung Catalyst Fund is excited to support Groq. We are highly impressed by Groq’s disruptive compute architecture and their software-first approach. Groq’s record-breaking speed and near-instant Generative AI inference performance leads the market.”

Jonathan Ross, CEO and Founder of Groq, emphasized the importance of inference compute in powering AI, stating,

“You can’t power AI without inference compute. We intend to make the resources available so that anyone can create cutting-edge AI products, not just the largest tech companies. This funding will enable us to deploy more than 100,000 additional LPUs into GroqCloud. Training AI models is solved, now it’s time to deploy these models so the world can use them. Having secured twice the funding sought, we now plan to significantly expand our talent density. We’re the team enabling hundreds of thousands of developers to build on open models – and we’re hiring.”

Groq also announced the addition of Stuart Pann, a former senior executive from HP and Intel, to its leadership team as Chief Operating Officer. Pann expressed enthusiasm about joining the company at a pivotal moment, noting, “We have the technology, the talent, and the market position to rapidly scale our capacity and deliver inference deployment economics for developers as well as for Groq.”

Additionally, Groq has gained the world-class expertise of its newest technical advisor, Yann LeCun, VP & Chief AI Scientist at Meta.

Developers Flock to Groq

Groq has seen rapid growth, with over 360,000 developers now building on GroqCloud™, creating AI applications on openly available models such as Llama 3.1 from Meta, Whisper Large V3 from OpenAI, Gemma from Google, and Mixtral from Mistral. The newly secured funding will be used to scale the capacity of Groq’s tokens-as-a-service (TaaS) offering and introduce new models and features to GroqCloud.

Mark Zuckerberg, CEO and Founder of Meta, highlighted Groq’s contribution to the AI landscape in his letter titled “Open Source AI Is the Path Forward,” noting, “Innovators like Groq have built low-latency, low-cost inference serving for all the new models.”

Scaling Capacity

As generative AI applications transition from training to deployment, developers and enterprises require an inference strategy that meets the need for speed. To address this demand, Groq plans to deploy over 108,000 LPUs manufactured by GlobalFoundries by the end of Q1 2025, marking the largest AI inference compute deployment by any non-hyperscaler.

Mohsen Moazami, President of International at Groq and former leader of Emerging Markets at Cisco, is spearheading commercial efforts with enterprises and partners, including Aramco Digital and Earth Wind & Power, to build out AI compute centers globally. This initiative aims to ensure that developers have access to Groq’s technology regardless of their location.

Tareq Amin, Chief Executive Officer of Aramco Digital, commented on the collaboration, stating, “Aramco Digital is partnering with Groq to build one of the largest AI Inference-as-a-Service compute infrastructures in the MENA region. Our close collaboration with Groq is transformational for both domestic and global AI demand.”

Accelerating Innovation

Groq’s LPU™ AI inference technology is designed from the ground up with a software-first approach, tailored to meet the unique needs of AI. This strategy has given Groq an edge in bringing new models to developers quickly and at unprecedented speeds. The investment will further enable Groq to accelerate the development of the next two generations of LPU.

Morgan Stanley & Co. LLC served as the exclusive Placement Agent to Groq for this transaction.

About Groq

Groq specializes in fast AI inference technology. The Groq® LPU™ AI inference platform, a hardware and software solution, delivers exceptional AI compute speed, quality, and energy efficiency. Headquartered in Silicon Valley, Groq provides both cloud and on-premises solutions at scale for AI applications. The LPU and related systems are designed and manufactured in North America.
