The 10 Coolest AI Chip Startups Of 2020

These AI chip startups are taking a variety of approaches to improving performance, efficiency and cost for AI, deep learning and machine learning workloads.

AI Chip Startups Place Their Bets

The AI chip battle is happening at full speed among semiconductor titans like Intel and Nvidia, but many smaller companies are also making big bets that they, too, can compete.

Nvidia’s GPUs have led the race in specialty AI chips so far, but several AI chip startups are betting they can outmatch the chip juggernaut in a variety of areas, whether it’s cost, efficiency, performance or flexibility for deep learning and machine learning workloads.

[Related: The 10 Hottest Semiconductor Startups Of 2020]

The global market for AI accelerator chips is expected to grow 35 percent annually from $8 billion in 2019 to $70 billion in 2026, according to a report by research firm Global Market Insights this year, underscoring the massive amount of cash and growth opportunities that are at stake in the AI chip war.
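As a rough sanity check on that forecast, compounding 35 percent annual growth over the seven years from 2019 to 2026 does land in the neighborhood of the cited figure. The short Python sketch below works through the arithmetic using only the numbers from the report:

```python
# Rough compound-growth check on the Global Market Insights forecast:
# $8 billion in 2019 growing ~35 percent annually through 2026.
start_value = 8.0      # market size in 2019, in billions of dollars
growth_rate = 0.35     # roughly 35 percent annual growth
years = 2026 - 2019    # seven years of compounding

projected = start_value * (1 + growth_rate) ** years
print(f"Projected 2026 market size: ${projected:.0f}B")  # ~$65B, in line with the ~$70B cited
```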

What follows is a roundup of the 10 coolest AI chip startups of 2020 so far, based on recent milestones the companies have reached, such as funding rounds, product launches and performance records.

Blaize

CEO: Dinakar Munagala

Blaize said its Graph Streaming Processor is the first to run multiple artificial intelligence models and workflows on a single system at the same time. The El Dorado, Calif.-based startup debuted its computing architecture at the beginning of the year after emerging from stealth mode last fall with $87 million in funding from investors. In August, the company revealed its first commercial products, the Xplorer X1600E and Xplorer X1600P for edge servers as well as the Pathfinder P1600 for small edge devices. The startup said its Graph Streaming Processor overcomes barriers in AI processing cost and size, providing 10 to 100 times greater efficiency than existing solutions.

Cerebras Systems

CEO: Andrew Feldman

Cerebras Systems said its Wafer Scale Engine processor is the largest chip ever built, packing 1.2 trillion transistors and 400,000 compute cores. The chip powers the Los Altos, Calif.-based startup’s CS-1 system, which the company said in November is more than 10,000 times faster than a GPU for running neural networks. Since the startup debuted its WSE chip and CS-1 system last year, it has landed big deals to provide CS-1 systems to the U.S. Department of Energy’s Argonne National Laboratory and the National Science Foundation’s Pittsburgh Supercomputing Center.

Flex Logix Technologies

CEO: Geoff Tate

Flex Logix Technologies said its InferX X1 chip is the world’s fastest for inference at the edge and can provide up to 11 times the throughput of Nvidia’s Jetson Xavier chip at one-seventh the size and a much lower cost. The Mountain View, Calif.-based startup said in October that it is bringing the InferX X1 to market with the InferX X1P1 and X1P4 PCIe boards and the InferX X1M M.2 board, which the company said provides higher throughput per dollar for lower-price-point servers. The company also unveiled a suite of software tools for optimization and application support.

Graphcore

CEO: Nigel Toon

Graphcore said its Intelligence Processing Unit chip is the first processor designed from the ground up for machine intelligence. The Bristol, U.K.-based startup in June unveiled its new Colossus MK2 IPU and the corresponding M2000 system, which is equipped with four MK2 IPUs. The startup said eight M2000s can outperform Nvidia’s DGX A100 systems by more than 12 times in FP32 compute and more than three times in AI compute while costing only 30 percent more in total. The company has racked up channel partnerships with several system builders and OEMs, including Penguin Computing, Lambda, Dell Technologies and Atos, as part of its Global Elite Partner Program.

Hailo

CEO: Orr Danon

Hailo said its Hailo-8 deep learning chip provides data center-level performance at the edge while beating competing edge processors in size, performance and power consumption. The Tel Aviv, Israel-based startup said earlier this year that it had raised a $60 million Series B funding round from ABB Technology Ventures, the corporate venture arm of Swiss manufacturing multinational ABB, as well as Japanese IT giant NEC Corp., to roll out the processor, which launched last year. The startup said the Hailo-8’s structure-driven Data Flow Architecture combines high performance, low power and minimal latency to provide up to 26 tera operations per second in edge devices such as smart cameras, smartphones and autonomous vehicles.

Kneron

CEO: Albert Liu

Kneron is developing artificial intelligence chips for edge devices that can adapt to audio and visual recognition applications on the fly. The San Diego-based startup in August unveiled its next-generation chip, the Kneron KL720, which supports full natural language processing and enhanced video processing capabilities. Kneron said the new chip, which is now sampling with device manufacturers, has twice the energy efficiency of Intel’s Movidius AI chips while delivering similar performance at half the cost. In January, the startup said it had raised an additional $40 million for its Series A, bringing the round’s total to $73 million, thanks to Qualcomm and other investors.

LeapMind

CEO: Soichi Matsuda

LeapMind is getting into the processor IP business with a design for an ultra-low-power AI inference accelerator for ASIC and FPGA circuits that can run AI models in small, 1- to 2-bit data formats at nearly the same accuracy as 8-bit formats. The Tokyo-based startup said this means AI models running on its IP do not require cutting-edge semiconductor manufacturing processes or specialized cell libraries to maximize power and space efficiency for inference processing. The company was expected to ship its Efficiera IP in the fall alongside a software development kit and other tools and services.
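For readers unfamiliar with how such low-bit formats work, the sketch below illustrates the general idea of 2-bit uniform weight quantization in Python. It is a generic illustration only and is not based on LeapMind’s actual Efficiera design; real ultra-low-bit schemes typically rely on learned scales and quantization-aware training to preserve accuracy.

```python
import numpy as np

def quantize_2bit(weights: np.ndarray) -> np.ndarray:
    """Map float weights onto 4 evenly spaced levels (2 bits per weight).

    A simple uniform quantizer for illustration only; production low-bit
    schemes use more sophisticated methods to stay close to 8-bit accuracy.
    """
    levels = 4  # 2 bits -> 2**2 representable values
    w_min, w_max = weights.min(), weights.max()
    scale = (w_max - w_min) / (levels - 1)
    indices = np.round((weights - w_min) / scale)  # integer codes 0..3
    return indices * scale + w_min                 # dequantized values

weights = np.random.randn(8).astype(np.float32)
print(weights)                # original 32-bit values
print(quantize_2bit(weights)) # same values snapped to 4 levels
```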

SambaNova Systems

CEO: Rodrigo Liang

While SambaNova Systems isn’t alone in working on hardware and software simultaneously to propel artificial intelligence workloads, the AI chip startup said its integrated hardware and software stand out from the crowd because of the company’s reconfigurable dataflow architecture. The Palo Alto, Calif.-based startup said this architecture allows applications to take the lead in driving how hardware is optimized to accelerate performance in data centers and at the edge. In February, the startup said it had raised a $250 million Series C funding round from Intel Capital, BlackRock and other investors to further accelerate its software capabilities.

SiMa.ai

CEO: Krishna Rangasayee

SiMa.ai said its Machine Learning System-on-Chip, or MLSoC for short, is the first chip to combine high performance, low power and hardware security for machine-learning inference. The San Jose, Calif.-based startup said the SoC is designed to be environmentally friendly and efficient, delivering 30 times more frames per second per watt than competing offerings. To accelerate production and customer delivery, the startup raised a $30 million Series A funding round, announced in May, that was led by Dell Technologies Capital.

Tenstorrent

CEO: Ljubisa Bajic

Tenstorrent said its Grayskull AI processor can unlock new levels of deep learning performance with the industry’s first conditional execution architecture, which dynamically eliminates unnecessary computation. The Toronto-based startup revealed Grayskull in April, saying its new approach to architecture will allow the processor to scale along with the continuing growth of AI models by adapting to the exact input of the model and tightly integrating computation and networking. The company has raised at least $33.2 million in capital from investors, including Eclipse Ventures and Real Ventures.
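As a rough illustration of what conditional execution means in principle (a hedged sketch in Python, not a description of how Grayskull actually implements it), the example below skips multiply-accumulate work for activations that are effectively zero, so the amount of computation tracks the input rather than the full model:

```python
import numpy as np

def conditional_matmul(activations: np.ndarray, weights: np.ndarray,
                       threshold: float = 1e-3) -> np.ndarray:
    """Toy illustration of conditional execution: rows of the weight matrix
    are only multiplied by activations whose magnitude exceeds a threshold,
    so near-zero inputs contribute no compute. Real conditional-execution
    hardware makes this decision dynamically and at much finer granularity.
    """
    out = np.zeros(weights.shape[1], dtype=activations.dtype)
    for i, a in enumerate(activations):
        if abs(a) > threshold:          # skip work for negligible activations
            out += a * weights[i]
    return out

acts = np.array([0.0, 0.9, 0.0, 0.0, -1.2], dtype=np.float32)  # mostly sparse input
w = np.random.randn(5, 3).astype(np.float32)
print(conditional_matmul(acts, w))  # only 2 of the 5 rows are actually computed
```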