Obscure startup wins prestigious CES 2024 award — you’ve probably never heard of it, but Panmnesia is the company that could make ChatGPT 6 (or 7) dozens of times faster

This AI accelerator could make large language models run dozens of times faster than they do on current-day hardware


The highly coveted Innovation Award at the forthcoming Consumer Electronics Show (CES) 2024 in January has been snapped up by a Korean startup for its AI accelerator.

Panmnesia has built its AI accelerator on Compute Express Link (CXL) 3.0 technology, which allows an external memory pool to be shared among host computers and components such as CPUs, translating into near-limitless memory capacity. This is thanks to a CXL 3.0 controller incorporated into the accelerator chip.

CXL is used to connect system devices – including accelerators, memory expanders, processors, and switches. By linking multiple accelerators and memory expanders through CXL switches, the technology can supply memory-intensive AI systems with all the capacity they need.
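To make the pooling idea concrete, here is a minimal toy sketch in Python. It is purely illustrative: the class names, the four-expander topology, and the 512GB sizes are invented for the example and reflect neither Panmnesia's design nor any real CXL software interface.

```python
# Toy model of CXL-style memory pooling (illustrative only, not real CXL code):
# a switch aggregates several memory expanders into one pool, and the pool's
# full capacity is what the hosts behind the switch can see.
from dataclasses import dataclass, field


@dataclass
class MemoryExpander:
    capacity_gb: int          # hypothetical expander size


@dataclass
class CXLSwitch:
    expanders: list = field(default_factory=list)

    def attach(self, expander: MemoryExpander) -> None:
        self.expanders.append(expander)

    def pool_capacity_gb(self) -> int:
        # The switch presents every attached expander as one aggregate pool.
        return sum(e.capacity_gb for e in self.expanders)


switch = CXLSwitch()
for _ in range(4):                       # four made-up 512GB expanders
    switch.attach(MemoryExpander(512))

print(switch.pool_capacity_gb())         # 2048 GB visible as a single pool
```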

What CXL 3.0 means for LLMs

Using CXL 2.0 in devices like this would give each host access only to its own dedicated portion of the pooled external memory, while the latest generation, CXL 3.0, allows hosts to access the entire pool as and when needed.
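That difference can be shown in the same toy style. Again, this is only an illustration: the host names, capacities, and partitioning scheme below are made up, and real CXL memory management is far more involved.

```python
# Illustrative contrast between CXL 2.0-style partitioned pooling and
# CXL 3.0-style shared access (toy model, not a real CXL interface).
class PooledMemory:
    def __init__(self, total_gb, partitions=None):
        self.total_gb = total_gb
        self.partitions = partitions     # host -> dedicated slice (2.0 style)

    def visible_to(self, host):
        if self.partitions is not None:  # CXL 2.0: each host sees its slice
            return self.partitions.get(host, 0)
        return self.total_gb             # CXL 3.0: any host can use it all


cxl2_pool = PooledMemory(2048, partitions={"host-a": 1024, "host-b": 1024})
cxl3_pool = PooledMemory(2048)

print(cxl2_pool.visible_to("host-a"))    # 1024 - capped at its dedicated share
print(cxl3_pool.visible_to("host-a"))    # 2048 - the entire pool on demand
```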

“We believe that our CXL technology will be a cornerstone for next-generation AI acceleration system,” said Panmnesia founder and CEO Myoungsoo Jung in a statement.

“We remain committed to our endeavor revolutionizing not only for AI acceleration system, but also other general-purpose environments such as data centers, cloud computing, and high-performance computing.”

Panmnesia’s technology works much as clusters of servers might share external SSDs to store data, and it would be particularly useful for servers because they often need to access more data than they can hold in their built-in memory.


This device is built specifically for large-scale AI applications – and its creators claim it is 101 times faster at performing AI-based search functions than conventional systems, which use network-linked SSDs to store data. The architecture also minimizes energy costs and operational expenditure.
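As a back-of-envelope illustration of where a figure like that could come from: if fetching data from an SSD over a network costs on the order of 100 microseconds while a load from CXL-attached memory costs about one, each access is roughly 100 times faster – the same ballpark as the claimed 101x. The latency values below are assumptions picked for the arithmetic, not numbers published by Panmnesia.

```python
# Back-of-envelope sketch; both latency figures are illustrative assumptions,
# not Panmnesia's measurements.
network_ssd_access_us = 100.0   # assumed: network hop plus SSD read (us)
cxl_memory_access_us = 1.0      # assumed: load from CXL-attached memory (us)

speedup = network_ssd_access_us / cxl_memory_access_us
print(f"Per-access speedup: {speedup:.0f}x")   # ~100x under these assumptions
```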

If used alongside hardware from other suppliers in the server configurations that the likes of OpenAI use to host large language models (LLMs) such as ChatGPT, it might drastically improve the performance of those models.
