The 10 Hottest AI Chip Startups Of 2020 (So Far)

As Intel made clear with its $2 billion acquisition of Habana Labs last year, the AI chip startup space is red-hot. CRN goes over 10 startups that are making big moves this year.


The Race To Accelerate AI Continues


Even in the face of a pandemic, the demand for artificial intelligence computation remains strong, whether it’s being used to accelerate COVID-19 research or fine-tune recommendation systems that many leading cloud services rely on.

That couldn’t be clearer based on the major investments semiconductor giants Intel and Nvidia have recently made in next-generation AI capabilities. While Nvidia is in the midst of rolling out a game-changing new data center GPU, the A100, Intel is looking at how to inject AI capabilities throughout its portfolio while making bold acquisitions like last year’s $2 billion Habana Labs deal.


[Related: The 10 Hottest IoT Startups Of 2020 (So Far)]

Intel’s acquisition of Habana Labs, a startup developing high-performance chips for training and inference, was one of the most recent signs that the AI chip startup space is still red-hot as several upstarts try to reinvent the way workloads are optimized on hardware.

What follows is a roundup of the 10 hottest AI chip startups of 2020 so far, based on recent milestones the companies have reached, including funding rounds, product launches and performance records.



Blaize

CEO: Dinakar Munagala

Blaize said that its Graph Streaming Processor is the first to run multiple artificial intelligence models and workflows on a single system at the same time. The El Dorado Hills, Calif.-based startup debuted its computing architecture at CES 2020 at the beginning of the year after emerging from stealth mode last fall with $87 million from investors. With support for automotive and smart vision use cases, the startup said its Graph Streaming Processor overcomes barriers in AI processing cost and size, providing 10 to 100 times greater efficiency than existing offerings.

Cerebras Systems

CEO: Andrew Feldman

Cerebras Systems said its Wafer Scale Engine processor is the largest chip ever built, packing 1.2 trillion transistors and 400,000 compute cores. The chip was unveiled last fall at Supercomputing 2019 alongside the processor's star vehicle, the CS-1 system, which the Los Altos, Calif.-based startup calls the "world's fastest AI supercomputer." Since then, the startup has landed big deals to provide its CS-1 systems to the U.S. Department of Energy's Argonne National Laboratory and the National Science Foundation's Pittsburgh Supercomputing Center.


Graphcore

CEO: Nigel Toon

Graphcore said its Intelligence Processing Unit chip is the first processor designed from the ground up for machine intelligence. Unlike other processors, Graphcore said the IPU can run an entire machine-learning model inside the chip. The Bristol, U.K.-based startup in February announced a $150 million funding round from investors and, a few months later, told CNBC that it had shipped "tens of thousands" of its processors thanks to partnerships with Microsoft and Dell Technologies, the latter of which last year released the Dell DSS8440, which is equipped with 16 Graphcore IPU processors.


Groq

CEO: Jonathan Ross

Groq said its Tensor Streaming Processor provides "unparalleled agility," eliminating the tradeoff between optimal responsiveness and maximum performance that traditional GPUs suffer from. The Mountain View, Calif.-based startup debuted its TSP chip last fall, saying at the time that it was the first to deliver 1 petaop of performance on a single chip. Since then, the startup's TSP architecture has been made available on Nimbix Cloud for pay-as-you-go machine-learning processing. In January, the company said that the TSP beat other commercially available neural network architectures on the ResNet-50 v2 inference benchmark for image classification.


Hailo

CEO: Orr Danon

Hailo claims its Hailo-8 deep learning chip provides data center-level performance at the edge while beating competing edge processors in size, performance and power consumption. To help roll out the processor, which launched last year, the Tel Aviv, Israel-based startup earlier this year announced a $60 million Series B funding round from ABB Technology Ventures, the corporate venture arm of Swiss manufacturing multinational ABB, as well as Japanese IT giant NEC Corp. The startup says the Hailo-8's structure-driven Data Flow architecture combines high performance, low power and minimal latency to provide up to 26 tera-operations per second in edge devices such as smart cameras, smartphones and autonomous vehicles.


Kneron

CEO: Albert Liu

Kneron is developing artificial intelligence chips for edge devices that can adapt to audio and visual recognition applications on the fly. The San Diego, Calif.-based startup announced in January that it had raised an additional $40 million for its Series A, bringing the round's total to $73 million, thanks to Horizons Ventures, Sequoia, Alibaba, Qualcomm and other investors. The startup's KL520 is a system-on-chip that combines dual Arm Cortex-M4 CPUs with Kneron's neural processing unit to provide high-performance inference in low-power devices such as smart home devices. The startup is using the new funding for development and commercialization of its second-generation SoC, the KL720, which is expected to begin sampling with customers by mid-summer.


Lightelligence

CEO: Yichen Shen

Lightelligence is using the power of light to build optical artificial intelligence chips. The startup, which has operations in the U.S. and China, reportedly raised a $26 million Series A round earlier this year from Matrix Partners China and CICC. The actual technology behind the startup's AI chips is integrated photonics, which involves using light in a similar way to how integrated circuits process and transmit electronic signals. With this, the startup said its optical chips can deliver dramatically faster performance, lower latency and lower power consumption than traditional chip architectures.


SambaNova Systems

CEO: Rodrigo Liang

While SambaNova Systems isn’t alone in working on hardware and software simultaneously to propel artificial intelligence workloads, the AI chip startup said its integrated hardware and software offering stands out in the crowd because of its reconfigurable dataflow architecture. The Palo Alto, Calif.-based startup said this architecture allows applications to take the lead in driving how hardware is optimized to accelerate performance in data centers and at the edge. In February, the startup announced that it had raised a $250 million Series C funding round from Intel Capital, BlackRock and other investors to further accelerate its software capabilities.

SiMa.ai

CEO: Krishna Rangasayee

SiMa.ai said its Machine Learning System-on-Chip, or MLSoC for short, is the first chip to combine high performance, low power and hardware security for machine-learning inference. The San Jose, Calif.-based startup said its SoC is designed to be environmentally friendly and efficient, capable of delivering 30 times more frames per second per watt than competing offerings. To accelerate production and customer delivery, the startup raised a $30 million Series A funding round, announced in May, that was led by Dell Technologies Capital.


Syntiant

CEO: Kurt Busch

Syntiant is developing artificial intelligence chips that are purpose-built for voice applications at the edge. The Irvine, Calif.-based startup began shipping its Neural Decision Processors globally at the beginning of the year with the expectation that the first NDP-embedded consumer products will be available before July. The startup’s NDP chips provide always-on deep learning processing for voice and other sensor applications in a wide range of battery-powered devices, from earbuds and laptops to mobile phones and smart speakers. Syntiant said that its NDP100 and NDP101 chips consume less than 140 microwatts and deliver 200 times greater efficiency and 20 times higher throughput compared with low-power microcontroller unit solutions.